Nov 25 09:00:20 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 09:00:20 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 09:00:20 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 09:00:20 localhost kernel: BIOS-provided physical RAM map:
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 09:00:20 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Nov 25 09:00:20 localhost kernel: NX (Execute Disable) protection: active
Nov 25 09:00:20 localhost kernel: APIC: Static calls initialized
Nov 25 09:00:20 localhost kernel: SMBIOS 2.8 present.
Nov 25 09:00:20 localhost kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Nov 25 09:00:20 localhost kernel: Hypervisor detected: KVM
Nov 25 09:00:20 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 09:00:20 localhost kernel: kvm-clock: using sched offset of 3146699042 cycles
Nov 25 09:00:20 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 09:00:20 localhost kernel: tsc: Detected 2445.406 MHz processor
Nov 25 09:00:20 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 25 09:00:20 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 25 09:00:20 localhost kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Nov 25 09:00:20 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 09:00:20 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 09:00:20 localhost kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 25 09:00:20 localhost kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Nov 25 09:00:20 localhost kernel: Using GB pages for direct mapping
Nov 25 09:00:20 localhost kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 25 09:00:20 localhost kernel: ACPI: Early table checksum verification disabled
Nov 25 09:00:20 localhost kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Nov 25 09:00:20 localhost kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:20 localhost kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:20 localhost kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:20 localhost kernel: ACPI: FACS 0x000000007FFDFC80 000040
Nov 25 09:00:20 localhost kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:20 localhost kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:20 localhost kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:00:20 localhost kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Nov 25 09:00:20 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Nov 25 09:00:20 localhost kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Nov 25 09:00:20 localhost kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Nov 25 09:00:20 localhost kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Nov 25 09:00:20 localhost kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Nov 25 09:00:20 localhost kernel: No NUMA configuration found
Nov 25 09:00:20 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Nov 25 09:00:20 localhost kernel: NODE_DATA(0) allocated [mem 0x27ffd3000-0x27fffdfff]
Nov 25 09:00:20 localhost kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Nov 25 09:00:20 localhost kernel: Zone ranges:
Nov 25 09:00:20 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 09:00:20 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 09:00:20 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000027fffffff]
Nov 25 09:00:20 localhost kernel:   Device   empty
Nov 25 09:00:20 localhost kernel: Movable zone start for each node
Nov 25 09:00:20 localhost kernel: Early memory node ranges
Nov 25 09:00:20 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 09:00:20 localhost kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 25 09:00:20 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000027fffffff]
Nov 25 09:00:20 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Nov 25 09:00:20 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 09:00:20 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 09:00:20 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 09:00:20 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 09:00:20 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 09:00:20 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 09:00:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 09:00:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 09:00:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 09:00:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 09:00:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 09:00:20 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 09:00:20 localhost kernel: TSC deadline timer available
Nov 25 09:00:20 localhost kernel: CPU topo: Max. logical packages:   4
Nov 25 09:00:20 localhost kernel: CPU topo: Max. logical dies:       4
Nov 25 09:00:20 localhost kernel: CPU topo: Max. dies per package:   1
Nov 25 09:00:20 localhost kernel: CPU topo: Max. threads per core:   1
Nov 25 09:00:20 localhost kernel: CPU topo: Num. cores per package:     1
Nov 25 09:00:20 localhost kernel: CPU topo: Num. threads per package:   1
Nov 25 09:00:20 localhost kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 25 09:00:20 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 09:00:20 localhost kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 25 09:00:20 localhost kernel: kvm-guest: setup PV sched yield
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 09:00:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 09:00:20 localhost kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 25 09:00:20 localhost kernel: Booting paravirtualized kernel on KVM
Nov 25 09:00:20 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 09:00:20 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 25 09:00:20 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Nov 25 09:00:20 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u524288 alloc=1*2097152
Nov 25 09:00:20 localhost kernel: pcpu-alloc: [0] 0 1 2 3 
Nov 25 09:00:20 localhost kernel: kvm-guest: PV spinlocks enabled
Nov 25 09:00:20 localhost kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 25 09:00:20 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 09:00:20 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 09:00:20 localhost kernel: random: crng init done
Nov 25 09:00:20 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 09:00:20 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 09:00:20 localhost kernel: Fallback order for Node 0: 0 
Nov 25 09:00:20 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 09:00:20 localhost kernel: Policy zone: Normal
Nov 25 09:00:20 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 09:00:20 localhost kernel: software IO TLB: area num 4.
Nov 25 09:00:20 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 25 09:00:20 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 09:00:20 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 09:00:20 localhost kernel: Dynamic Preempt: voluntary
Nov 25 09:00:20 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 09:00:20 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 25 09:00:20 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Nov 25 09:00:20 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 25 09:00:20 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 25 09:00:20 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 25 09:00:20 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 09:00:20 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 25 09:00:20 localhost kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 25 09:00:20 localhost kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 25 09:00:20 localhost kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 25 09:00:20 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Nov 25 09:00:20 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 09:00:20 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 09:00:20 localhost kernel: Console: colour VGA+ 80x25
Nov 25 09:00:20 localhost kernel: printk: console [ttyS0] enabled
Nov 25 09:00:20 localhost kernel: ACPI: Core revision 20230331
Nov 25 09:00:20 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 09:00:20 localhost kernel: x2apic enabled
Nov 25 09:00:20 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 09:00:20 localhost kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 25 09:00:20 localhost kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 25 09:00:20 localhost kernel: kvm-guest: setup PV IPIs
Nov 25 09:00:20 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 09:00:20 localhost kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Nov 25 09:00:20 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 09:00:20 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 09:00:20 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 09:00:20 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 09:00:20 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 09:00:20 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 09:00:20 localhost kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 25 09:00:20 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 09:00:20 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 09:00:20 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 09:00:20 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 09:00:20 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 09:00:20 localhost kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Nov 25 09:00:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 09:00:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 09:00:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 09:00:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 25 09:00:20 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 09:00:20 localhost kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Nov 25 09:00:20 localhost kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 25 09:00:20 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 25 09:00:20 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 25 09:00:20 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 09:00:20 localhost kernel: landlock: Up and running.
Nov 25 09:00:20 localhost kernel: Yama: becoming mindful.
Nov 25 09:00:20 localhost kernel: SELinux:  Initializing.
Nov 25 09:00:20 localhost kernel: LSM support for eBPF active
Nov 25 09:00:20 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 09:00:20 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 09:00:20 localhost kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 25 09:00:20 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 09:00:20 localhost kernel: ... version:                0
Nov 25 09:00:20 localhost kernel: ... bit width:              48
Nov 25 09:00:20 localhost kernel: ... generic registers:      6
Nov 25 09:00:20 localhost kernel: ... value mask:             0000ffffffffffff
Nov 25 09:00:20 localhost kernel: ... max period:             00007fffffffffff
Nov 25 09:00:20 localhost kernel: ... fixed-purpose events:   0
Nov 25 09:00:20 localhost kernel: ... event mask:             000000000000003f
Nov 25 09:00:20 localhost kernel: signal: max sigframe size: 3376
Nov 25 09:00:20 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 25 09:00:20 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 25 09:00:20 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 25 09:00:20 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 25 09:00:20 localhost kernel: .... node  #0, CPUs:      #1 #2 #3
Nov 25 09:00:20 localhost kernel: smp: Brought up 1 node, 4 CPUs
Nov 25 09:00:20 localhost kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Nov 25 09:00:20 localhost kernel: node 0 deferred pages initialised in 7ms
Nov 25 09:00:20 localhost kernel: Memory: 7778824K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 604524K reserved, 0K cma-reserved)
Nov 25 09:00:20 localhost kernel: devtmpfs: initialized
Nov 25 09:00:20 localhost kernel: x86/mm: Memory block size: 128MB
Nov 25 09:00:20 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 09:00:20 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 25 09:00:20 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 09:00:20 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 09:00:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 09:00:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 09:00:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 09:00:20 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 25 09:00:20 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 09:00:20 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 09:00:20 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 09:00:20 localhost kernel: audit: type=2000 audit(1764061220.219:1): state=initialized audit_enabled=0 res=1
Nov 25 09:00:20 localhost kernel: cpuidle: using governor menu
Nov 25 09:00:20 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 09:00:20 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 25 09:00:20 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 25 09:00:20 localhost kernel: PCI: Using configuration type 1 for base access
Nov 25 09:00:20 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 09:00:20 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 09:00:20 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 09:00:20 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 09:00:20 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 09:00:20 localhost kernel: Demotion targets for Node 0: null
Nov 25 09:00:20 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 09:00:20 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 25 09:00:20 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 25 09:00:20 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 09:00:20 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 09:00:20 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 09:00:20 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 09:00:20 localhost kernel: ACPI: Interpreter enabled
Nov 25 09:00:20 localhost kernel: ACPI: PM: (supports S0 S5)
Nov 25 09:00:20 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 09:00:20 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 09:00:20 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 09:00:20 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 25 09:00:20 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 09:00:20 localhost kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 09:00:20 localhost kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Nov 25 09:00:20 localhost kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Nov 25 09:00:20 localhost kernel: PCI host bridge to bus 0000:00
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 25 09:00:20 localhost kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Nov 25 09:00:20 localhost kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:02: extended config space not accessible
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [1] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [2] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [3] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [4] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [5] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [6] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [7] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [8] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [9] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [10] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [11] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [12] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [13] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [14] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [15] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [16] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [17] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [18] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [19] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [20] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [21] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [22] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [23] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [24] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [25] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [26] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [27] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [28] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [29] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [30] registered
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [31] registered
Nov 25 09:00:20 localhost kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-2] registered
Nov 25 09:00:20 localhost kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Nov 25 09:00:20 localhost kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-3] registered
Nov 25 09:00:20 localhost kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Nov 25 09:00:20 localhost kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-4] registered
Nov 25 09:00:20 localhost kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-5] registered
Nov 25 09:00:20 localhost kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Nov 25 09:00:20 localhost kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-6] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-7] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-8] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-9] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-10] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-11] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-12] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-13] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-14] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-15] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-16] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 25 09:00:20 localhost kernel: acpiphp: Slot [0-17] registered
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 25 09:00:20 localhost kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 25 09:00:20 localhost kernel: iommu: Default domain type: Translated
Nov 25 09:00:20 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 09:00:20 localhost kernel: SCSI subsystem initialized
Nov 25 09:00:20 localhost kernel: ACPI: bus type USB registered
Nov 25 09:00:20 localhost kernel: usbcore: registered new interface driver usbfs
Nov 25 09:00:20 localhost kernel: usbcore: registered new interface driver hub
Nov 25 09:00:20 localhost kernel: usbcore: registered new device driver usb
Nov 25 09:00:20 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 09:00:20 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 09:00:20 localhost kernel: PTP clock support registered
Nov 25 09:00:20 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 25 09:00:20 localhost kernel: NetLabel: Initializing
Nov 25 09:00:20 localhost kernel: NetLabel:  domain hash size = 128
Nov 25 09:00:20 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 09:00:20 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 09:00:20 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 25 09:00:20 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 25 09:00:20 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 25 09:00:20 localhost kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 25 09:00:20 localhost kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 25 09:00:20 localhost kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 09:00:20 localhost kernel: vgaarb: loaded
Nov 25 09:00:20 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 09:00:20 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 09:00:20 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 09:00:20 localhost kernel: pnp: PnP ACPI init
Nov 25 09:00:20 localhost kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 25 09:00:20 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 25 09:00:20 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 09:00:20 localhost kernel: NET: Registered PF_INET protocol family
Nov 25 09:00:20 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 09:00:20 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 09:00:20 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 09:00:20 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 09:00:20 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 09:00:20 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 09:00:20 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 09:00:20 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 09:00:20 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 09:00:20 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 09:00:20 localhost kernel: NET: Registered PF_XDP protocol family
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 25 09:00:20 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Nov 25 09:00:20 localhost kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
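
Note: the bridge windows programmed on each root port reappear here as per-bus resources: resource 0 (where present) is the I/O window, resource 1 the 32-bit memory window, resource 2 the 64-bit prefetchable window. A minimal Python sketch of decoding one such line and computing the window size; the regex and the window_size helper are illustrative, not part of any kernel tool:

    import re

    # Matches lines such as:
    #   pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
    RES_RE = re.compile(
        r"pci_bus (?P<bus>[0-9a-f]{4}:[0-9a-f]{2}): resource \d+ "
        r"\[(?P<kind>io|mem)\s+(?P<start>0x[0-9a-f]+)-(?P<end>0x[0-9a-f]+)")

    def window_size(line):
        """Return (bus, kind, size_in_bytes) for a pci_bus resource line."""
        m = RES_RE.search(line)
        if m is None:
            return None
        start, end = int(m["start"], 16), int(m["end"], 16)
        return m["bus"], m["kind"], end - start + 1

    print(window_size("pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]"))
    # -> ('0000:0e', 'mem', 2097152), i.e. a 2 MiB window
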
Nov 25 09:00:20 localhost kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 25 09:00:20 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 25 09:00:20 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 09:00:20 localhost kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
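
Note: the SWIOTLB size is implied by the mapped range itself, 0x6f000000 - 0x6b000000 = 0x04000000 bytes, agreeing with the kernel's own "(64MB)":

    print((0x6f000000 - 0x6b000000) // 2**20)   # -> 64 (MiB)
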
Nov 25 09:00:20 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 25 09:00:20 localhost kernel: ACPI: bus type thunderbolt registered
Nov 25 09:00:20 localhost kernel: Initialise system trusted keyrings
Nov 25 09:00:20 localhost kernel: Key type blacklist registered
Nov 25 09:00:20 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 09:00:20 localhost kernel: zbud: loaded
Nov 25 09:00:20 localhost kernel: integrity: Platform Keyring initialized
Nov 25 09:00:20 localhost kernel: integrity: Machine keyring initialized
Nov 25 09:00:20 localhost kernel: Freeing initrd memory: 75160K
Nov 25 09:00:20 localhost kernel: NET: Registered PF_ALG protocol family
Nov 25 09:00:20 localhost kernel: xor: automatically using best checksumming function avx
Nov 25 09:00:20 localhost kernel: Key type asymmetric registered
Nov 25 09:00:20 localhost kernel: Asymmetric key parser 'x509' registered
Nov 25 09:00:20 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 09:00:20 localhost kernel: io scheduler mq-deadline registered
Nov 25 09:00:20 localhost kernel: io scheduler kyber registered
Nov 25 09:00:20 localhost kernel: io scheduler bfq registered
Nov 25 09:00:20 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 25 09:00:20 localhost kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Nov 25 09:00:20 localhost kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Nov 25 09:00:20 localhost kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Nov 25 09:00:20 localhost kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Nov 25 09:00:20 localhost kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Nov 25 09:00:20 localhost kernel: shpchp 0000:01:00.0: Slot initialization failed
Nov 25 09:00:20 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
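
Note: the -16 in the shpchp failure above is the kernel's -EBUSY. pci_hp_register() returns it when a hotplug slot is already registered for that position, which commonly happens on QEMU machine types where another hotplug driver has claimed the slot first; SHPC slot setup then backs off, and the message is harmless. For reference, the errno mapping:

    import errno, os
    print(errno.errorcode[16], "-", os.strerror(16))
    # -> EBUSY - Device or resource busy
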
Nov 25 09:00:20 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 09:00:20 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 25 09:00:20 localhost kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 25 09:00:20 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 09:00:20 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 09:00:20 localhost kernel: Non-volatile memory driver v1.3
Nov 25 09:00:20 localhost kernel: rdac: device handler registered
Nov 25 09:00:20 localhost kernel: hp_sw: device handler registered
Nov 25 09:00:20 localhost kernel: emc: device handler registered
Nov 25 09:00:20 localhost kernel: alua: device handler registered
Nov 25 09:00:20 localhost kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Nov 25 09:00:20 localhost kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Nov 25 09:00:20 localhost kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Nov 25 09:00:20 localhost kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Nov 25 09:00:20 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 09:00:20 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 09:00:20 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 25 09:00:20 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 09:00:20 localhost kernel: usb usb1: SerialNumber: 0000:02:01.0
Nov 25 09:00:20 localhost kernel: hub 1-0:1.0: USB hub found
Nov 25 09:00:20 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 25 09:00:20 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 09:00:20 localhost kernel: usbserial: USB Serial support registered for generic
Nov 25 09:00:20 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 09:00:20 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 09:00:20 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 09:00:20 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 09:00:20 localhost kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 25 09:00:20 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 09:00:20 localhost kernel: rtc_cmos 00:03: registered as rtc0
Nov 25 09:00:20 localhost kernel: rtc_cmos 00:03: setting system clock to 2025-11-25T09:00:20 UTC (1764061220)
Nov 25 09:00:20 localhost kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
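
Note: the rtc_cmos line states the same instant twice, as a calendar date and as a Unix timestamp, and the two agree:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1764061220, tz=timezone.utc).isoformat())
    # -> 2025-11-25T09:00:20+00:00
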
Nov 25 09:00:20 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 09:00:20 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 09:00:20 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 09:00:20 localhost kernel: usbcore: registered new interface driver usbhid
Nov 25 09:00:20 localhost kernel: usbhid: USB HID core driver
Nov 25 09:00:20 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 25 09:00:20 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 09:00:20 localhost kernel: Initializing XFRM netlink socket
Nov 25 09:00:20 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 25 09:00:20 localhost kernel: Segment Routing with IPv6
Nov 25 09:00:20 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 25 09:00:20 localhost kernel: mpls_gso: MPLS GSO support
Nov 25 09:00:20 localhost kernel: IPI shorthand broadcast: enabled
Nov 25 09:00:20 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 09:00:20 localhost kernel: AES CTR mode by8 optimization enabled
Nov 25 09:00:20 localhost kernel: sched_clock: Marking stable (1143002257, 142171336)->(1361054747, -75881154)
Nov 25 09:00:20 localhost kernel: registered taskstats version 1
Nov 25 09:00:20 localhost kernel: Loading compiled-in X.509 certificates
Nov 25 09:00:20 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 09:00:20 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 09:00:20 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 09:00:20 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 09:00:20 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 09:00:20 localhost kernel: Demotion targets for Node 0: null
Nov 25 09:00:20 localhost kernel: page_owner is disabled
Nov 25 09:00:20 localhost kernel: Key type .fscrypt registered
Nov 25 09:00:20 localhost kernel: Key type fscrypt-provisioning registered
Nov 25 09:00:20 localhost kernel: Key type big_key registered
Nov 25 09:00:20 localhost kernel: Key type encrypted registered
Nov 25 09:00:20 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 09:00:20 localhost kernel: Loading compiled-in module X.509 certificates
Nov 25 09:00:20 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 09:00:20 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 25 09:00:20 localhost kernel: ima: No architecture policies found
Nov 25 09:00:20 localhost kernel: evm: Initialising EVM extended attributes:
Nov 25 09:00:20 localhost kernel: evm: security.selinux
Nov 25 09:00:20 localhost kernel: evm: security.SMACK64 (disabled)
Nov 25 09:00:20 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 09:00:20 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 09:00:20 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 09:00:20 localhost kernel: evm: security.apparmor (disabled)
Nov 25 09:00:20 localhost kernel: evm: security.ima
Nov 25 09:00:20 localhost kernel: evm: security.capability
Nov 25 09:00:20 localhost kernel: evm: HMAC attrs: 0x1
Nov 25 09:00:20 localhost kernel: Running certificate verification RSA selftest
Nov 25 09:00:20 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 09:00:20 localhost kernel: Running certificate verification ECDSA selftest
Nov 25 09:00:20 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 09:00:20 localhost kernel: clk: Disabling unused clocks
Nov 25 09:00:20 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 25 09:00:20 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 09:00:20 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 25 09:00:20 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 09:00:20 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 09:00:20 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 09:00:20 localhost kernel: Run /init as init process
Nov 25 09:00:20 localhost kernel:   with arguments:
Nov 25 09:00:20 localhost kernel:     /init
Nov 25 09:00:20 localhost kernel:   with environment:
Nov 25 09:00:20 localhost kernel:     HOME=/
Nov 25 09:00:20 localhost kernel:     TERM=linux
Nov 25 09:00:20 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 25 09:00:20 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
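
Note: the systemd banner encodes compile-time options as +FEATURE/-FEATURE tokens; here, for instance, SELinux support is built in while AppArmor support is not. A small sketch that splits such a banner into enabled and disabled sets (the flags string below is abridged from the line above):

    flags = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP"
    enabled = {f[1:] for f in flags.split() if f[0] == "+"}
    disabled = {f[1:] for f in flags.split() if f[0] == "-"}
    print("SELINUX" in enabled, "APPARMOR" in disabled)   # -> True True
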
Nov 25 09:00:20 localhost systemd[1]: Detected virtualization kvm.
Nov 25 09:00:20 localhost systemd[1]: Detected architecture x86-64.
Nov 25 09:00:20 localhost systemd[1]: Running in initrd.
Nov 25 09:00:20 localhost systemd[1]: No hostname configured, using default hostname.
Nov 25 09:00:20 localhost systemd[1]: Hostname set to <localhost>.
Nov 25 09:00:20 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 25 09:00:20 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 25 09:00:20 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 09:00:20 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 09:00:20 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 25 09:00:20 localhost systemd[1]: Reached target Local File Systems.
Nov 25 09:00:20 localhost systemd[1]: Reached target Path Units.
Nov 25 09:00:20 localhost systemd[1]: Reached target Slice Units.
Nov 25 09:00:20 localhost systemd[1]: Reached target Swaps.
Nov 25 09:00:20 localhost systemd[1]: Reached target Timer Units.
Nov 25 09:00:20 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 09:00:20 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 25 09:00:20 localhost systemd[1]: Listening on Journal Socket.
Nov 25 09:00:20 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 09:00:20 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 09:00:20 localhost systemd[1]: Reached target Socket Units.
Nov 25 09:00:20 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 09:00:20 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 09:00:20 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 09:00:20 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 09:00:20 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 25 09:00:20 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Nov 25 09:00:20 localhost systemd[1]: Starting Journal Service...
Nov 25 09:00:20 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 09:00:20 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 09:00:20 localhost systemd[1]: Starting Create System Users...
Nov 25 09:00:20 localhost systemd[1]: Starting Setup Virtual Console...
Nov 25 09:00:20 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 09:00:20 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
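
Note: the hot-added pointer device enumerates as USB ID 0627:0001, QEMU's emulated USB tablet. The idVendor/idProduct pair in "New USB device found" lines is a stable key for inventorying guest hardware; a throwaway parse, for illustration:

    import re
    line = ("usb 1-1: New USB device found, idVendor=0627, "
            "idProduct=0001, bcdDevice= 0.00")
    ids = dict(re.findall(r"(idVendor|idProduct)=([0-9a-f]{4})", line))
    print("{}:{}".format(ids["idVendor"], ids["idProduct"]))   # -> 0627:0001
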
Nov 25 09:00:20 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 09:00:20 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 09:00:20 localhost systemd-journald[281]: Journal started
Nov 25 09:00:20 localhost systemd-journald[281]: Runtime Journal (/run/log/journal/0f2c6148bac340499f53233f21cb16c0) is 8.0M, max 153.6M, 145.6M free.
Nov 25 09:00:20 localhost systemd[1]: Started Journal Service.
Nov 25 09:00:20 localhost systemd-sysusers[284]: Creating group 'users' with GID 100.
Nov 25 09:00:20 localhost systemd-sysusers[284]: Creating group 'dbus' with GID 81.
Nov 25 09:00:20 localhost systemd-sysusers[284]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 09:00:20 localhost systemd[1]: Finished Create System Users.
Nov 25 09:00:20 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 09:00:20 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 09:00:21 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 09:00:21 localhost systemd[1]: Finished Setup Virtual Console.
Nov 25 09:00:21 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 09:00:21 localhost systemd[1]: Starting dracut cmdline hook...
Nov 25 09:00:21 localhost dracut-cmdline[300]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 09:00:21 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 09:00:21 localhost dracut-cmdline[300]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
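
Note: the crashkernel= parameter echoed by dracut uses range:size syntax: for total RAM within 1G-2G reserve 192M for the kdump kernel, within 2G-64G reserve 256M, and above 64G reserve 512M (the open-ended "64G-"). A sketch of that parsing; parse_crashkernel is a hypothetical helper, not a dracut function:

    def parse_crashkernel(spec):
        """Parse 'LO-HI:SIZE,...' specs into (lo, hi, size) byte tuples."""
        units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}
        size = lambda s: None if s == "" else int(s[:-1]) * units[s[-1]]
        out = []
        for chunk in spec.split(","):
            rng, res = chunk.split(":")
            lo, hi = rng.split("-")
            out.append((size(lo), size(hi), size(res)))
        return out

    print(parse_crashkernel("1G-2G:192M,2G-64G:256M,64G-:512M"))
    # three (lo, hi, reservation) tuples in bytes; hi is None for "64G-"
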
Nov 25 09:00:21 localhost systemd[1]: Finished dracut cmdline hook.
Nov 25 09:00:21 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 25 09:00:21 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 09:00:21 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 25 09:00:21 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 09:00:21 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 25 09:00:21 localhost kernel: RPC: Registered udp transport module.
Nov 25 09:00:21 localhost kernel: RPC: Registered tcp transport module.
Nov 25 09:00:21 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 09:00:21 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 09:00:21 localhost rpc.statd[415]: Version 2.5.4 starting
Nov 25 09:00:21 localhost rpc.statd[415]: Initializing NSM state
Nov 25 09:00:21 localhost rpc.idmapd[420]: Setting log level to 0
Nov 25 09:00:21 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 25 09:00:21 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 09:00:21 localhost systemd-udevd[433]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 09:00:21 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 09:00:21 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 25 09:00:21 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 25 09:00:21 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 09:00:21 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 25 09:00:21 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 09:00:21 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 09:00:21 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 09:00:21 localhost systemd[1]: Reached target Network.
Nov 25 09:00:21 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 09:00:21 localhost systemd[1]: Starting dracut initqueue hook...
Nov 25 09:00:21 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 09:00:21 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 09:00:21 localhost kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Nov 25 09:00:21 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 09:00:21 localhost kernel:  vda: vda1
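
Note: the virtio_blk capacity line gives the same size in decimal and binary units, and the arithmetic from the block count checks out:

    size = 167772160 * 512
    print(size)           # 85899345920 bytes
    print(size / 10**9)   # 85.89934592 -> reported as "85.9 GB"
    print(size / 2**30)   # 80.0        -> reported as "80.0 GiB"
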
Nov 25 09:00:21 localhost kernel: libata version 3.00 loaded.
Nov 25 09:00:21 localhost systemd-udevd[437]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:00:21 localhost kernel: ahci 0000:00:1f.2: version 3.0
Nov 25 09:00:21 localhost kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 25 09:00:21 localhost kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 25 09:00:21 localhost kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 25 09:00:21 localhost kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Nov 25 09:00:21 localhost kernel: scsi host0: ahci
Nov 25 09:00:21 localhost kernel: scsi host1: ahci
Nov 25 09:00:21 localhost kernel: scsi host2: ahci
Nov 25 09:00:21 localhost kernel: scsi host3: ahci
Nov 25 09:00:21 localhost kernel: scsi host4: ahci
Nov 25 09:00:21 localhost kernel: scsi host5: ahci
Nov 25 09:00:21 localhost kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Nov 25 09:00:21 localhost kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Nov 25 09:00:21 localhost kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Nov 25 09:00:21 localhost kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Nov 25 09:00:21 localhost kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Nov 25 09:00:21 localhost kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Nov 25 09:00:21 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 09:00:21 localhost systemd[1]: Reached target Initrd Root Device.
Nov 25 09:00:21 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 25 09:00:21 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 25 09:00:21 localhost systemd[1]: Reached target System Initialization.
Nov 25 09:00:21 localhost systemd[1]: Reached target Basic System.
Nov 25 09:00:21 localhost kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 25 09:00:21 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 09:00:21 localhost kernel: ata1.00: applying bridge limits
Nov 25 09:00:21 localhost kernel: ata1.00: configured for UDMA/100
Nov 25 09:00:21 localhost kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:21 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 09:00:21 localhost kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:21 localhost kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:21 localhost kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:21 localhost kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 25 09:00:21 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 09:00:21 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 09:00:21 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 09:00:22 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 25 09:00:22 localhost systemd[1]: Finished dracut initqueue hook.
Nov 25 09:00:22 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 09:00:22 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 09:00:22 localhost systemd[1]: Reached target Remote File Systems.
Nov 25 09:00:22 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 25 09:00:22 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 25 09:00:22 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 25 09:00:22 localhost systemd-fsck[527]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 09:00:22 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 09:00:22 localhost systemd[1]: Mounting /sysroot...
Nov 25 09:00:22 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 09:00:22 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 25 09:00:22 localhost kernel: XFS (vda1): Ending clean mount
Nov 25 09:00:22 localhost systemd[1]: Mounted /sysroot.
Nov 25 09:00:22 localhost systemd[1]: Reached target Initrd Root File System.
Nov 25 09:00:22 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 09:00:22 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 09:00:22 localhost systemd[1]: Reached target Initrd File Systems.
Nov 25 09:00:22 localhost systemd[1]: Reached target Initrd Default Target.
Nov 25 09:00:22 localhost systemd[1]: Starting dracut mount hook...
Nov 25 09:00:22 localhost systemd[1]: Finished dracut mount hook.
Nov 25 09:00:22 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 09:00:22 localhost rpc.idmapd[420]: exiting on signal 15
Nov 25 09:00:22 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 09:00:22 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 09:00:22 localhost systemd[1]: Stopped target Network.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Timer Units.
Nov 25 09:00:22 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 09:00:22 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Basic System.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Path Units.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Remote File Systems.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Slice Units.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Socket Units.
Nov 25 09:00:22 localhost systemd[1]: Stopped target System Initialization.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Local File Systems.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Swaps.
Nov 25 09:00:22 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped dracut mount hook.
Nov 25 09:00:22 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 25 09:00:22 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 09:00:22 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 09:00:22 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 25 09:00:22 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 25 09:00:22 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 09:00:22 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 09:00:22 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 09:00:22 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 09:00:22 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 25 09:00:22 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 09:00:22 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 09:00:22 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Closed udev Control Socket.
Nov 25 09:00:22 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Closed udev Kernel Socket.
Nov 25 09:00:22 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 25 09:00:22 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 25 09:00:22 localhost systemd[1]: Starting Cleanup udev Database...
Nov 25 09:00:22 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 09:00:22 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 09:00:22 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Stopped Create System Users.
Nov 25 09:00:22 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 09:00:22 localhost systemd[1]: Finished Cleanup udev Database.
Nov 25 09:00:22 localhost systemd[1]: Reached target Switch Root.
Nov 25 09:00:22 localhost systemd[1]: Starting Switch Root...
Nov 25 09:00:22 localhost systemd[1]: Switching root.
Nov 25 09:00:22 localhost systemd-journald[281]: Journal stopped
Nov 25 09:00:23 localhost systemd-journald[281]: Received SIGTERM from PID 1 (systemd).
Nov 25 09:00:23 localhost kernel: audit: type=1404 audit(1764061222.851:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 09:00:23 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:00:23 localhost kernel: SELinux:  policy capability open_perms=1
Nov 25 09:00:23 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:00:23 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:00:23 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:00:23 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:00:23 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:00:23 localhost kernel: audit: type=1403 audit(1764061222.979:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
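
Note: in the two audit records above, auid=4294967295 and ses=4294967295 are not real IDs: 4294967295 is (uint32)-1, audit's "unset" sentinel, as expected when PID 1 loads SELinux policy before any login session exists. The identity is easy to confirm:

    import ctypes
    print(4294967295 == 2**32 - 1)       # -> True
    print(ctypes.c_uint32(-1).value)     # -> 4294967295
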
Nov 25 09:00:23 localhost systemd[1]: Successfully loaded SELinux policy in 133.381ms.
Nov 25 09:00:23 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.597ms.
Nov 25 09:00:23 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 09:00:23 localhost systemd[1]: Detected virtualization kvm.
Nov 25 09:00:23 localhost systemd[1]: Detected architecture x86-64.
Nov 25 09:00:23 localhost systemd-rc-local-generator[606]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:00:23 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 25 09:00:23 localhost systemd[1]: Stopped Switch Root.
Nov 25 09:00:23 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 09:00:23 localhost systemd[1]: Created slice Slice /system/getty.
Nov 25 09:00:23 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 25 09:00:23 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 25 09:00:23 localhost systemd[1]: Created slice User and Session Slice.
Nov 25 09:00:23 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 09:00:23 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 25 09:00:23 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 09:00:23 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 09:00:23 localhost systemd[1]: Stopped target Switch Root.
Nov 25 09:00:23 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 25 09:00:23 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 25 09:00:23 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 25 09:00:23 localhost systemd[1]: Reached target Path Units.
Nov 25 09:00:23 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 25 09:00:23 localhost systemd[1]: Reached target Slice Units.
Nov 25 09:00:23 localhost systemd[1]: Reached target Swaps.
Nov 25 09:00:23 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 25 09:00:23 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 25 09:00:23 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 25 09:00:23 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 25 09:00:23 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 25 09:00:23 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 09:00:23 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 09:00:23 localhost systemd[1]: Mounting Huge Pages File System...
Nov 25 09:00:23 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 25 09:00:23 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 25 09:00:23 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 25 09:00:23 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 09:00:23 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 09:00:23 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 09:00:23 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 25 09:00:23 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 25 09:00:23 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 25 09:00:23 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 09:00:23 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 25 09:00:23 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 25 09:00:23 localhost systemd[1]: Stopped Journal Service.
Nov 25 09:00:23 localhost systemd[1]: Starting Journal Service...
Nov 25 09:00:23 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 09:00:23 localhost kernel: fuse: init (API version 7.37)
Nov 25 09:00:23 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 25 09:00:23 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 09:00:23 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 25 09:00:23 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 09:00:23 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 09:00:23 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 09:00:23 localhost systemd[1]: Mounted Huge Pages File System.
Nov 25 09:00:23 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 25 09:00:23 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 25 09:00:23 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 25 09:00:23 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 09:00:23 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 25 09:00:23 localhost kernel: ACPI: bus type drm_connector registered
Nov 25 09:00:23 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 09:00:23 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 09:00:23 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 09:00:23 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 25 09:00:23 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 09:00:23 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 09:00:23 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 09:00:23 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 25 09:00:23 localhost systemd-journald[647]: Journal started
Nov 25 09:00:23 localhost systemd-journald[647]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 09:00:23 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 25 09:00:23 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 09:00:23 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 09:00:23 localhost systemd[1]: Started Journal Service.
Nov 25 09:00:23 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 09:00:23 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 09:00:23 localhost systemd[1]: Mounting FUSE Control File System...
Nov 25 09:00:23 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 09:00:23 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 25 09:00:23 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 09:00:23 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 09:00:23 localhost systemd-journald[647]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 09:00:23 localhost systemd-journald[647]: Received client request to flush runtime journal.
Nov 25 09:00:23 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 09:00:23 localhost systemd[1]: Starting Create System Users...
Nov 25 09:00:23 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 09:00:23 localhost systemd[1]: Mounted FUSE Control File System.
Nov 25 09:00:23 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 09:00:23 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 09:00:23 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 09:00:23 localhost systemd[1]: Finished Create System Users.
Nov 25 09:00:23 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 09:00:23 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 09:00:23 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 09:00:23 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 09:00:23 localhost systemd[1]: Reached target Local File Systems.
Nov 25 09:00:23 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 09:00:23 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 09:00:23 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 09:00:23 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 09:00:23 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 09:00:23 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 09:00:23 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 09:00:23 localhost bootctl[665]: Couldn't find EFI system partition, skipping.
Nov 25 09:00:23 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 25 09:00:23 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 09:00:23 localhost systemd[1]: Starting Security Auditing Service...
Nov 25 09:00:23 localhost systemd[1]: Starting RPC Bind...
Nov 25 09:00:23 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 09:00:23 localhost auditd[671]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 09:00:23 localhost auditd[671]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 09:00:23 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 09:00:23 localhost systemd[1]: Started RPC Bind.
Nov 25 09:00:23 localhost augenrules[676]: /sbin/augenrules: No change
Nov 25 09:00:23 localhost augenrules[691]: No rules
Nov 25 09:00:23 localhost augenrules[691]: enabled 1
Nov 25 09:00:23 localhost augenrules[691]: failure 1
Nov 25 09:00:23 localhost augenrules[691]: pid 671
Nov 25 09:00:23 localhost augenrules[691]: rate_limit 0
Nov 25 09:00:23 localhost augenrules[691]: backlog_limit 8192
Nov 25 09:00:23 localhost augenrules[691]: lost 0
Nov 25 09:00:23 localhost augenrules[691]: backlog 0
Nov 25 09:00:23 localhost augenrules[691]: backlog_wait_time 60000
Nov 25 09:00:23 localhost augenrules[691]: backlog_wait_time_actual 0
Nov 25 09:00:23 localhost augenrules[691]: enabled 1
Nov 25 09:00:23 localhost augenrules[691]: failure 1
Nov 25 09:00:23 localhost augenrules[691]: pid 671
Nov 25 09:00:23 localhost augenrules[691]: rate_limit 0
Nov 25 09:00:23 localhost augenrules[691]: backlog_limit 8192
Nov 25 09:00:23 localhost augenrules[691]: lost 0
Nov 25 09:00:23 localhost augenrules[691]: backlog 3
Nov 25 09:00:23 localhost augenrules[691]: backlog_wait_time 60000
Nov 25 09:00:23 localhost augenrules[691]: backlog_wait_time_actual 0
Nov 25 09:00:23 localhost augenrules[691]: enabled 1
Nov 25 09:00:23 localhost augenrules[691]: failure 1
Nov 25 09:00:23 localhost augenrules[691]: pid 671
Nov 25 09:00:23 localhost augenrules[691]: rate_limit 0
Nov 25 09:00:23 localhost augenrules[691]: backlog_limit 8192
Nov 25 09:00:23 localhost augenrules[691]: lost 0
Nov 25 09:00:23 localhost augenrules[691]: backlog 2
Nov 25 09:00:23 localhost augenrules[691]: backlog_wait_time 60000
Nov 25 09:00:23 localhost augenrules[691]: backlog_wait_time_actual 0
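
Note: augenrules prints the kernel audit status more than once as it checks and (re)loads rules, which is why the enabled/failure/pid block above repeats with only the backlog counter changing (0, 3, 2). Each block is plain key/value text and folds into a dict:

    lines = ["enabled 1", "failure 1", "pid 671", "rate_limit 0",
             "backlog_limit 8192", "lost 0", "backlog 0",
             "backlog_wait_time 60000", "backlog_wait_time_actual 0"]
    status = {k: int(v) for k, v in (l.split() for l in lines)}
    print(status["backlog_limit"], status["lost"])   # -> 8192 0
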
Nov 25 09:00:23 localhost systemd[1]: Started Security Auditing Service.
Nov 25 09:00:23 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 09:00:23 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 09:00:23 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 09:00:23 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 25 09:00:23 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 09:00:23 localhost systemd[1]: Starting Update is Completed...
Nov 25 09:00:23 localhost systemd[1]: Finished Update is Completed.
Nov 25 09:00:23 localhost systemd-udevd[699]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 09:00:23 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 09:00:23 localhost systemd[1]: Reached target System Initialization.
Nov 25 09:00:23 localhost systemd[1]: Started dnf makecache --timer.
Nov 25 09:00:23 localhost systemd[1]: Started Daily rotation of log files.
Nov 25 09:00:23 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 09:00:23 localhost systemd[1]: Reached target Timer Units.
Nov 25 09:00:23 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 09:00:23 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 09:00:23 localhost systemd[1]: Reached target Socket Units.
Nov 25 09:00:23 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 25 09:00:23 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 09:00:23 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 09:00:23 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 09:00:23 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 09:00:23 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 09:00:23 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 25 09:00:23 localhost systemd[1]: Reached target Basic System.
Nov 25 09:00:23 localhost dbus-broker-lau[722]: Ready
Nov 25 09:00:23 localhost systemd-udevd[707]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:00:23 localhost systemd[1]: Starting NTP client/server...
Nov 25 09:00:23 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 09:00:23 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 09:00:23 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 09:00:23 localhost systemd[1]: Started irqbalance daemon.
Nov 25 09:00:23 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 09:00:23 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:00:23 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:00:23 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:00:23 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 25 09:00:23 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 09:00:23 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 25 09:00:23 localhost systemd[1]: Starting User Login Management...
Nov 25 09:00:23 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 09:00:24 localhost chronyd[752]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 09:00:24 localhost chronyd[752]: Loaded 0 symmetric keys
Nov 25 09:00:24 localhost chronyd[752]: Using right/UTC timezone to obtain leap second data
Nov 25 09:00:24 localhost chronyd[752]: Loaded seccomp filter (level 2)
Nov 25 09:00:24 localhost systemd[1]: Started NTP client/server.
Nov 25 09:00:24 localhost systemd-logind[744]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 09:00:24 localhost systemd-logind[744]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 09:00:24 localhost systemd-logind[744]: New seat seat0.
Nov 25 09:00:24 localhost systemd[1]: Started User Login Management.
Nov 25 09:00:24 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 09:00:24 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 09:00:24 localhost kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Nov 25 09:00:24 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 09:00:24 localhost kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 25 09:00:24 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 09:00:24 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 09:00:24 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 25 09:00:24 localhost kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 25 09:00:24 localhost kernel: Console: switching to colour dummy device 80x25
Nov 25 09:00:24 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 09:00:24 localhost kernel: [drm] features: -context_init
Nov 25 09:00:24 localhost iptables.init[738]: iptables: Applying firewall rules: [  OK  ]
Nov 25 09:00:24 localhost kernel: [drm] number of scanouts: 1
Nov 25 09:00:24 localhost kernel: [drm] number of cap sets: 0
Nov 25 09:00:24 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 25 09:00:24 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Nov 25 09:00:24 localhost kernel: iTCO_vendor_support: vendor-support=0
Nov 25 09:00:24 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 09:00:24 localhost kernel: Console: switching to colour frame buffer device 160x50
Nov 25 09:00:24 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Nov 25 09:00:24 localhost kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 09:00:24 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Nov 25 09:00:24 localhost kernel: kvm_amd: TSC scaling supported
Nov 25 09:00:24 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 25 09:00:24 localhost kernel: kvm_amd: Nested Paging enabled
Nov 25 09:00:24 localhost kernel: kvm_amd: LBR virtualization supported
Nov 25 09:00:24 localhost kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 25 09:00:24 localhost kernel: kvm_amd: Virtual GIF supported
Nov 25 09:00:24 localhost cloud-init[791]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 25 Nov 2025 09:00:24 +0000. Up 5.05 seconds.
Nov 25 09:00:24 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 25 09:00:24 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 25 09:00:24 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp_2jo9uyu.mount: Deactivated successfully.
Nov 25 09:00:24 localhost systemd[1]: Starting Hostname Service...
Nov 25 09:00:24 localhost systemd[1]: Started Hostname Service.
Nov 25 09:00:24 np0005534694 systemd-hostnamed[805]: Hostname set to <np0005534694> (static)
Nov 25 09:00:24 np0005534694 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 09:00:24 np0005534694 systemd[1]: Reached target Preparation for Network.
Nov 25 09:00:24 np0005534694 systemd[1]: Starting Network Manager...
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8495] NetworkManager (version 1.54.1-1.el9) is starting... (boot:f8c189f2-455d-46a5-8a09-714641cd81d1)
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8500] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8587] manager[0x55ff5290a080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8620] hostname: hostname: using hostnamed
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8620] hostname: static hostname changed from (none) to "np0005534694"
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8625] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8704] manager[0x55ff5290a080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8705] manager[0x55ff5290a080]: rfkill: WWAN hardware radio set enabled
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8758] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8759] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8760] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8760] manager: Networking is enabled by state file
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8764] settings: Loaded settings plugin: keyfile (internal)
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8783] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 09:00:24 np0005534694 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8825] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
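
The deprecation warning names its own remedy: profiles still loaded through the ifcfg-rh plugin (here the 'System eth0' profile activated below) can be converted in place to keyfile format, e.g.:

    # nmcli connection migrate
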
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8843] dhcp: init: Using DHCP client 'internal'
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8845] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8860] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8870] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8879] device (lo): Activation: starting connection 'lo' (28ef2950-3e02-469b-b897-6f6f0a688c29)
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8889] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8893] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:00:24 np0005534694 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:00:24 np0005534694 systemd[1]: Started Network Manager.
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8940] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 09:00:24 np0005534694 systemd[1]: Reached target Network.
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8956] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8960] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8965] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8969] device (eth0): carrier: link connected
Nov 25 09:00:24 np0005534694 systemd[1]: Starting Network Manager Wait Online...
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8981] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8990] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8995] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8999] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.8999] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9000] manager: NetworkManager state is now CONNECTING
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9001] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9007] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:00:24 np0005534694 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9023] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9029] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9068] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9075] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 09:00:24 np0005534694 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9147] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9153] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 09:00:24 np0005534694 NetworkManager[809]: <info>  [1764061224.9163] device (lo): Activation: successful, device activated.
Nov 25 09:00:24 np0005534694 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 09:00:24 np0005534694 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 09:00:24 np0005534694 systemd[1]: Reached target NFS client services.
Nov 25 09:00:24 np0005534694 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 09:00:24 np0005534694 systemd[1]: Reached target Remote File Systems.
Nov 25 09:00:24 np0005534694 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 09:00:26 np0005534694 NetworkManager[809]: <info>  [1764061226.8159] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:00:27 np0005534694 NetworkManager[809]: <info>  [1764061227.8412] dhcp6 (eth0): state changed new lease, address=2001:db8::1c1
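
At this point eth0 holds both leases, 192.168.26.109 from DHCPv4 and 2001:db8::1c1 from DHCPv6, obtained by NetworkManager's internal DHCP client. The result can be inspected with, for example:

    $ nmcli -f IP4,IP6 device show eth0
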
Nov 25 09:00:29 np0005534694 NetworkManager[809]: <info>  [1764061229.6960] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:00:29 np0005534694 NetworkManager[809]: <info>  [1764061229.6992] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:00:29 np0005534694 NetworkManager[809]: <info>  [1764061229.6995] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:00:29 np0005534694 NetworkManager[809]: <info>  [1764061229.7001] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:00:29 np0005534694 NetworkManager[809]: <info>  [1764061229.7011] device (eth0): Activation: successful, device activated.
Nov 25 09:00:29 np0005534694 NetworkManager[809]: <info>  [1764061229.7016] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 09:00:29 np0005534694 NetworkManager[809]: <info>  [1764061229.7023] manager: startup complete
Nov 25 09:00:29 np0005534694 systemd[1]: Finished Network Manager Wait Online.
Nov 25 09:00:29 np0005534694 systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 09:00:29 np0005534694 chronyd[752]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Nov 25 09:00:29 np0005534694 chronyd[752]: System clock TAI offset set to 37 seconds
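
chronyd has selected an upstream NTP source and applied the 37 s TAI offset; synchronization state can be confirmed with:

    $ chronyc tracking
    $ chronyc sources -v
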
Nov 25 09:00:29 np0005534694 cloud-init[876]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 25 Nov 2025 09:00:29 +0000. Up 10.57 seconds.
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |  eth0  | True |        192.168.26.109        | 255.255.255.0 | global | fa:16:3e:1d:24:fe |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |  eth0  | True |      2001:db8::1c1/128       |       .       | global | fa:16:3e:1d:24:fe |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |  eth0  | True | fe80::f816:3eff:fe1d:24fe/64 |       .       |  link  | fa:16:3e:1d:24:fe |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: ++++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +-------+---------------+-------------+-----------+-------+
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: | Route |  Destination  |   Gateway   | Interface | Flags |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +-------+---------------+-------------+-----------+-------+
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   1   |  2001:db8::1  |      ::     |    eth0   |   U   |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   2   | 2001:db8::1c1 |      ::     |    eth0   |   U   |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   3   |   fe80::/64   |      ::     |    eth0   |   U   |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   4   |      ::/0     | 2001:db8::1 |    eth0   |   UG  |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   6   |     local     |      ::     |    eth0   |   U   |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   7   |     local     |      ::     |    eth0   |   U   |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: |   8   |   multicast   |      ::     |    eth0   |   U   |
Nov 25 09:00:30 np0005534694 cloud-init[876]: ci-info: +-------+---------------+-------------+-----------+-------+
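
The ci-info tables are cloud-init's rendering of kernel state, so the same addresses and routes should be reproducible directly with iproute2:

    $ ip -4 addr show eth0
    $ ip route show
    $ ip -6 route show
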
Nov 25 09:00:30 np0005534694 useradd[943]: new group: name=cloud-user, GID=1001
Nov 25 09:00:30 np0005534694 useradd[943]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 25 09:00:30 np0005534694 useradd[943]: add 'cloud-user' to group 'adm'
Nov 25 09:00:30 np0005534694 useradd[943]: add 'cloud-user' to group 'systemd-journal'
Nov 25 09:00:30 np0005534694 useradd[943]: add 'cloud-user' to shadow group 'adm'
Nov 25 09:00:30 np0005534694 useradd[943]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 25 09:00:30 np0005534694 cloud-init[876]: Generating public/private rsa key pair.
Nov 25 09:00:30 np0005534694 cloud-init[876]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 09:00:30 np0005534694 cloud-init[876]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 09:00:30 np0005534694 cloud-init[876]: The key fingerprint is:
Nov 25 09:00:30 np0005534694 cloud-init[876]: SHA256:KDCHE8IGBXyznof4SkKPJrq6fw9Xkc8RdN1iRfhCV2g root@np0005534694
Nov 25 09:00:30 np0005534694 cloud-init[876]: The key's randomart image is:
Nov 25 09:00:30 np0005534694 cloud-init[876]: +---[RSA 3072]----+
Nov 25 09:00:30 np0005534694 cloud-init[876]: |Boo      .o .. *=|
Nov 25 09:00:30 np0005534694 cloud-init[876]: | = =     . o  E o|
Nov 25 09:00:30 np0005534694 cloud-init[876]: |. * +   o .  + + |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |   *   . + .  . .|
Nov 25 09:00:30 np0005534694 cloud-init[876]: | .o + . S o    . |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |..o+ o .         |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |o+..o .          |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |*  ..o           |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |*=o. ..          |
Nov 25 09:00:30 np0005534694 cloud-init[876]: +----[SHA256]-----+
Nov 25 09:00:30 np0005534694 cloud-init[876]: Generating public/private ecdsa key pair.
Nov 25 09:00:30 np0005534694 cloud-init[876]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 09:00:30 np0005534694 cloud-init[876]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 09:00:30 np0005534694 cloud-init[876]: The key fingerprint is:
Nov 25 09:00:30 np0005534694 cloud-init[876]: SHA256:erGBkgjld1974i8yxtIJ39WUWBesUJD3WsiNYc/HFm8 root@np0005534694
Nov 25 09:00:30 np0005534694 cloud-init[876]: The key's randomart image is:
Nov 25 09:00:30 np0005534694 cloud-init[876]: +---[ECDSA 256]---+
Nov 25 09:00:30 np0005534694 cloud-init[876]: |  .        .+....|
Nov 25 09:00:30 np0005534694 cloud-init[876]: | o         o +.o.|
Nov 25 09:00:30 np0005534694 cloud-init[876]: |. . . .   . =oX+o|
Nov 25 09:00:30 np0005534694 cloud-init[876]: | . o o o . ..=o*E|
Nov 25 09:00:30 np0005534694 cloud-init[876]: |  . o . S o .ooo.|
Nov 25 09:00:30 np0005534694 cloud-init[876]: |     ... = o...  |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |      .=oo..     |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |      ..O o.     |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |       o o ..    |
Nov 25 09:00:30 np0005534694 cloud-init[876]: +----[SHA256]-----+
Nov 25 09:00:30 np0005534694 cloud-init[876]: Generating public/private ed25519 key pair.
Nov 25 09:00:30 np0005534694 cloud-init[876]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 09:00:30 np0005534694 cloud-init[876]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 09:00:30 np0005534694 cloud-init[876]: The key fingerprint is:
Nov 25 09:00:30 np0005534694 cloud-init[876]: SHA256:zOe/8IOWTyl4PtL7Fi+WxeE5Mc0lUL2tjmdwBA9wx64 root@np0005534694
Nov 25 09:00:30 np0005534694 cloud-init[876]: The key's randomart image is:
Nov 25 09:00:30 np0005534694 cloud-init[876]: +--[ED25519 256]--+
Nov 25 09:00:30 np0005534694 cloud-init[876]: |          ..o+o. |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |           .o.o o|
Nov 25 09:00:30 np0005534694 cloud-init[876]: |             = ++|
Nov 25 09:00:30 np0005534694 cloud-init[876]: |       o      B.+|
Nov 25 09:00:30 np0005534694 cloud-init[876]: |        S .  = * |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |         +  E.O  |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |        ..=ooX . |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |        .o**B =  |
Nov 25 09:00:30 np0005534694 cloud-init[876]: |         ooB**   |
Nov 25 09:00:30 np0005534694 cloud-init[876]: +----[SHA256]-----+
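
The SHA256 fingerprints printed for each generated host key can be re-derived later from the public key files, which is the usual way to verify a first connection against this console output:

    $ ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
    $ ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub
    $ ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
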
Nov 25 09:00:31 np0005534694 systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 09:00:31 np0005534694 systemd[1]: Reached target Cloud-config availability.
Nov 25 09:00:31 np0005534694 systemd[1]: Reached target Network is Online.
Nov 25 09:00:31 np0005534694 systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 09:00:31 np0005534694 systemd[1]: Starting Crash recovery kernel arming...
Nov 25 09:00:31 np0005534694 systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 09:00:31 np0005534694 systemd[1]: Starting System Logging Service...
Nov 25 09:00:31 np0005534694 systemd[1]: Starting OpenSSH server daemon...
Nov 25 09:00:31 np0005534694 sm-notify[960]: Version 2.5.4 starting
Nov 25 09:00:31 np0005534694 systemd[1]: Starting Permit User Sessions...
Nov 25 09:00:31 np0005534694 sshd[962]: Server listening on 0.0.0.0 port 22.
Nov 25 09:00:31 np0005534694 sshd[962]: Server listening on :: port 22.
Nov 25 09:00:31 np0005534694 systemd[1]: Started OpenSSH server daemon.
Nov 25 09:00:31 np0005534694 systemd[1]: Started Notify NFS peers of a restart.
Nov 25 09:00:31 np0005534694 systemd[1]: Finished Permit User Sessions.
Nov 25 09:00:31 np0005534694 systemd[1]: Started Command Scheduler.
Nov 25 09:00:31 np0005534694 systemd[1]: Started Getty on tty1.
Nov 25 09:00:31 np0005534694 crond[965]: (CRON) STARTUP (1.5.7)
Nov 25 09:00:31 np0005534694 crond[965]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 25 09:00:31 np0005534694 crond[965]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 9% if used.)
Nov 25 09:00:31 np0005534694 crond[965]: (CRON) INFO (running with inotify support)
Nov 25 09:00:31 np0005534694 systemd[1]: Started Serial Getty on ttyS0.
Nov 25 09:00:31 np0005534694 systemd[1]: Reached target Login Prompts.
Nov 25 09:00:31 np0005534694 systemd[1]: Started System Logging Service.
Nov 25 09:00:31 np0005534694 rsyslogd[961]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="961" x-info="https://www.rsyslog.com"] start
Nov 25 09:00:31 np0005534694 rsyslogd[961]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 09:00:31 np0005534694 systemd[1]: Reached target Multi-User System.
Nov 25 09:00:31 np0005534694 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 09:00:31 np0005534694 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 09:00:31 np0005534694 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 09:00:31 np0005534694 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:00:31 np0005534694 kdumpctl[973]: kdump: No kdump initial ramdisk found.
Nov 25 09:00:31 np0005534694 kdumpctl[973]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 09:00:31 np0005534694 cloud-init[1094]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 25 Nov 2025 09:00:31 +0000. Up 11.99 seconds.
Nov 25 09:00:31 np0005534694 systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 09:00:31 np0005534694 systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 09:00:31 np0005534694 dracut[1221]: dracut-057-102.git20250818.el9
Nov 25 09:00:31 np0005534694 cloud-init[1239]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 25 Nov 2025 09:00:31 +0000. Up 12.36 seconds.
Nov 25 09:00:31 np0005534694 dracut[1223]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
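
This dracut invocation is generated by kdumpctl itself while the kdump service is being armed for the first time. After later changes to /etc/kdump.conf the same rebuild can be requested by hand; a sketch, assuming the RHEL 9 kexec-tools layout:

    # kdumpctl rebuild
    # kdumpctl restart
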
Nov 25 09:00:31 np0005534694 cloud-init[1253]: #############################################################
Nov 25 09:00:31 np0005534694 cloud-init[1257]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 09:00:31 np0005534694 cloud-init[1262]: 256 SHA256:erGBkgjld1974i8yxtIJ39WUWBesUJD3WsiNYc/HFm8 root@np0005534694 (ECDSA)
Nov 25 09:00:31 np0005534694 cloud-init[1267]: 256 SHA256:zOe/8IOWTyl4PtL7Fi+WxeE5Mc0lUL2tjmdwBA9wx64 root@np0005534694 (ED25519)
Nov 25 09:00:31 np0005534694 cloud-init[1275]: 3072 SHA256:KDCHE8IGBXyznof4SkKPJrq6fw9Xkc8RdN1iRfhCV2g root@np0005534694 (RSA)
Nov 25 09:00:31 np0005534694 cloud-init[1276]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 09:00:31 np0005534694 cloud-init[1277]: #############################################################
Nov 25 09:00:31 np0005534694 cloud-init[1239]: Cloud-init v. 24.4-7.el9 finished at Tue, 25 Nov 2025 09:00:31 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.51 seconds
Nov 25 09:00:31 np0005534694 sshd-session[1300]: Unable to negotiate with 192.168.26.11 port 43774: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 25 09:00:31 np0005534694 sshd-session[1312]: Unable to negotiate with 192.168.26.11 port 43792: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 25 09:00:31 np0005534694 sshd-session[1314]: Unable to negotiate with 192.168.26.11 port 43806: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 25 09:00:31 np0005534694 systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 09:00:31 np0005534694 systemd[1]: Reached target Cloud-init target.
Nov 25 09:00:32 np0005534694 sshd-session[1323]: Unable to negotiate with 192.168.26.11 port 43840: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 25 09:00:32 np0005534694 sshd-session[1328]: Unable to negotiate with 192.168.26.11 port 43850: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 25 09:00:32 np0005534694 sshd-session[1287]: Connection closed by 192.168.26.11 port 43770 [preauth]
Nov 25 09:00:32 np0005534694 sshd-session[1309]: Connection closed by 192.168.26.11 port 43776 [preauth]
Nov 25 09:00:32 np0005534694 sshd-session[1316]: Connection closed by 192.168.26.11 port 43818 [preauth]
Nov 25 09:00:32 np0005534694 sshd-session[1318]: Connection closed by 192.168.26.11 port 43828 [preauth]
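
The "Unable to negotiate" burst is one connection per host key algorithm family from 192.168.26.11 (ssh-ed25519, ecdsa-sha2-nistp384, ecdsa-sha2-nistp521, ssh-rsa, ssh-dss), a pattern consistent with a host-key scanner such as ssh-keyscan; each rejected offer is an algorithm this sshd declines to serve. What the local OpenSSH build knows and what this server is actually configured to offer can be listed with:

    $ ssh -Q key
    # sshd -T | grep -i hostkeyalgorithms
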
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 25 09:00:32 np0005534694 dracut[1223]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 25 09:00:32 np0005534694 dracut[1223]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: memstrack is not available
Nov 25 09:00:32 np0005534694 dracut[1223]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 09:00:32 np0005534694 dracut[1223]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 09:00:33 np0005534694 dracut[1223]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 09:00:33 np0005534694 dracut[1223]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 09:00:33 np0005534694 dracut[1223]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 09:00:33 np0005534694 dracut[1223]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 09:00:33 np0005534694 dracut[1223]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 09:00:33 np0005534694 dracut[1223]: memstrack is not available
Nov 25 09:00:33 np0005534694 dracut[1223]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 09:00:33 np0005534694 dracut[1223]: *** Including module: systemd ***
Nov 25 09:00:33 np0005534694 dracut[1223]: *** Including module: fips ***
Nov 25 09:00:33 np0005534694 dracut[1223]: *** Including module: systemd-initrd ***
Nov 25 09:00:33 np0005534694 dracut[1223]: *** Including module: i18n ***
Nov 25 09:00:33 np0005534694 dracut[1223]: *** Including module: drm ***
Nov 25 09:00:34 np0005534694 dracut[1223]: *** Including module: prefixdevname ***
Nov 25 09:00:34 np0005534694 dracut[1223]: *** Including module: kernel-modules ***
Nov 25 09:00:34 np0005534694 kernel: block vda: the capability attribute has been deprecated.
Nov 25 09:00:34 np0005534694 irqbalance[742]: Cannot change IRQ 45 affinity: Operation not permitted
Nov 25 09:00:34 np0005534694 irqbalance[742]: IRQ 45 affinity is now unmanaged
Nov 25 09:00:34 np0005534694 irqbalance[742]: Cannot change IRQ 44 affinity: Operation not permitted
Nov 25 09:00:34 np0005534694 irqbalance[742]: IRQ 44 affinity is now unmanaged
Nov 25 09:00:34 np0005534694 irqbalance[742]: Cannot change IRQ 42 affinity: Operation not permitted
Nov 25 09:00:34 np0005534694 irqbalance[742]: IRQ 42 affinity is now unmanaged
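
The irqbalance failures are expected in a KVM guest: the affinity of these virtio MSI-X vectors is managed by the kernel, so the writes are rejected and IRQs 42/44/45 become unmanaged. This is harmless; if the noise is unwanted, the IRQs can be banned explicitly. A sketch, assuming the stock /etc/sysconfig/irqbalance shipped on RHEL 9:

    # echo 'IRQBALANCE_ARGS="--banirq=42 --banirq=44 --banirq=45"' >> /etc/sysconfig/irqbalance
    # systemctl restart irqbalance
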
Nov 25 09:00:34 np0005534694 dracut[1223]: *** Including module: kernel-modules-extra ***
Nov 25 09:00:34 np0005534694 dracut[1223]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 25 09:00:34 np0005534694 dracut[1223]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 25 09:00:34 np0005534694 dracut[1223]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 25 09:00:34 np0005534694 dracut[1223]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 25 09:00:34 np0005534694 dracut[1223]: *** Including module: qemu ***
Nov 25 09:00:34 np0005534694 dracut[1223]: *** Including module: fstab-sys ***
Nov 25 09:00:34 np0005534694 dracut[1223]: *** Including module: rootfs-block ***
Nov 25 09:00:34 np0005534694 dracut[1223]: *** Including module: terminfo ***
Nov 25 09:00:34 np0005534694 dracut[1223]: *** Including module: udev-rules ***
Nov 25 09:00:35 np0005534694 dracut[1223]: Skipping udev rule: 91-permissions.rules
Nov 25 09:00:35 np0005534694 dracut[1223]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 09:00:35 np0005534694 dracut[1223]: *** Including module: virtiofs ***
Nov 25 09:00:35 np0005534694 dracut[1223]: *** Including module: dracut-systemd ***
Nov 25 09:00:35 np0005534694 dracut[1223]: *** Including module: usrmount ***
Nov 25 09:00:35 np0005534694 dracut[1223]: *** Including module: base ***
Nov 25 09:00:35 np0005534694 dracut[1223]: *** Including module: fs-lib ***
Nov 25 09:00:35 np0005534694 dracut[1223]: *** Including module: kdumpbase ***
Nov 25 09:00:35 np0005534694 dracut[1223]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 09:00:35 np0005534694 dracut[1223]:   microcode_ctl module: mangling fw_dir
Nov 25 09:00:35 np0005534694 dracut[1223]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 09:00:35 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 09:00:35 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel" is ignored
Nov 25 09:00:35 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 09:00:35 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 09:00:35 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 09:00:36 np0005534694 dracut[1223]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 25 09:00:36 np0005534694 dracut[1223]: *** Including module: openssl ***
Nov 25 09:00:36 np0005534694 dracut[1223]: *** Including module: shutdown ***
Nov 25 09:00:36 np0005534694 dracut[1223]: *** Including module: squash ***
Nov 25 09:00:36 np0005534694 dracut[1223]: *** Including modules done ***
Nov 25 09:00:36 np0005534694 dracut[1223]: *** Installing kernel module dependencies ***
Nov 25 09:00:37 np0005534694 dracut[1223]: *** Installing kernel module dependencies done ***
Nov 25 09:00:37 np0005534694 dracut[1223]: *** Resolving executable dependencies ***
Nov 25 09:00:38 np0005534694 dracut[1223]: *** Resolving executable dependencies done ***
Nov 25 09:00:38 np0005534694 dracut[1223]: *** Generating early-microcode cpio image ***
Nov 25 09:00:38 np0005534694 dracut[1223]: *** Store current command line parameters ***
Nov 25 09:00:38 np0005534694 dracut[1223]: Stored kernel commandline:
Nov 25 09:00:38 np0005534694 dracut[1223]: No dracut internal kernel commandline stored in the initramfs
Nov 25 09:00:38 np0005534694 dracut[1223]: *** Install squash loader ***
Nov 25 09:00:39 np0005534694 dracut[1223]: *** Squashing the files inside the initramfs ***
Nov 25 09:00:39 np0005534694 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:00:40 np0005534694 dracut[1223]: *** Squashing the files inside the initramfs done ***
Nov 25 09:00:40 np0005534694 dracut[1223]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 09:00:40 np0005534694 dracut[1223]: *** Hardlinking files ***
Nov 25 09:00:40 np0005534694 dracut[1223]: Mode:           real
Nov 25 09:00:40 np0005534694 dracut[1223]: Files:          50
Nov 25 09:00:40 np0005534694 dracut[1223]: Linked:         0 files
Nov 25 09:00:40 np0005534694 dracut[1223]: Compared:       0 xattrs
Nov 25 09:00:40 np0005534694 dracut[1223]: Compared:       0 files
Nov 25 09:00:40 np0005534694 dracut[1223]: Saved:          0 B
Nov 25 09:00:40 np0005534694 dracut[1223]: Duration:       0.000530 seconds
Nov 25 09:00:40 np0005534694 dracut[1223]: *** Hardlinking files done ***
Nov 25 09:00:41 np0005534694 dracut[1223]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 25 09:00:41 np0005534694 kdumpctl[973]: kdump: kexec: loaded kdump kernel
Nov 25 09:00:41 np0005534694 kdumpctl[973]: kdump: Starting kdump: [OK]
Nov 25 09:00:41 np0005534694 systemd[1]: Finished Crash recovery kernel arming.
Nov 25 09:00:41 np0005534694 systemd[1]: Startup finished in 1.379s (kernel) + 2.083s (initrd) + 18.561s (userspace) = 22.024s.
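
The timing summary is the same figure systemd-analyze reports; the 18.561 s spent in userspace can be attributed per unit with:

    $ systemd-analyze
    $ systemd-analyze blame
    $ systemd-analyze critical-chain
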
Nov 25 09:00:52 np0005534694 sshd-session[4366]: Accepted publickey for zuul from 192.168.26.12 port 43220 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 25 09:00:52 np0005534694 systemd[1]: Created slice User Slice of UID 1000.
Nov 25 09:00:52 np0005534694 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 09:00:52 np0005534694 systemd-logind[744]: New session 1 of user zuul.
Nov 25 09:00:52 np0005534694 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 09:00:52 np0005534694 systemd[1]: Starting User Manager for UID 1000...
Nov 25 09:00:52 np0005534694 systemd[4370]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:00:52 np0005534694 systemd[4370]: Queued start job for default target Main User Target.
Nov 25 09:00:52 np0005534694 systemd[4370]: Created slice User Application Slice.
Nov 25 09:00:52 np0005534694 systemd[4370]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 09:00:52 np0005534694 systemd[4370]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 09:00:52 np0005534694 systemd[4370]: Reached target Paths.
Nov 25 09:00:52 np0005534694 systemd[4370]: Reached target Timers.
Nov 25 09:00:52 np0005534694 systemd[4370]: Starting D-Bus User Message Bus Socket...
Nov 25 09:00:52 np0005534694 systemd[4370]: Starting Create User's Volatile Files and Directories...
Nov 25 09:00:52 np0005534694 systemd[4370]: Finished Create User's Volatile Files and Directories.
Nov 25 09:00:52 np0005534694 systemd[4370]: Listening on D-Bus User Message Bus Socket.
Nov 25 09:00:52 np0005534694 systemd[4370]: Reached target Sockets.
Nov 25 09:00:52 np0005534694 systemd[4370]: Reached target Basic System.
Nov 25 09:00:52 np0005534694 systemd[4370]: Reached target Main User Target.
Nov 25 09:00:52 np0005534694 systemd[4370]: Startup finished in 86ms.
Nov 25 09:00:52 np0005534694 systemd[1]: Started User Manager for UID 1000.
Nov 25 09:00:52 np0005534694 systemd[1]: Started Session 1 of User zuul.
Nov 25 09:00:52 np0005534694 sshd-session[4366]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:00:53 np0005534694 python3[4452]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:00:54 np0005534694 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:00:54 np0005534694 python3[4480]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:01:00 np0005534694 python3[4536]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
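
These ansible-setup entries are Zuul invoking Ansible's fact-gathering module over SSH; gather_subset trims what is collected ('!all' for the minimal set, 'network' for interface facts only). An equivalent ad-hoc call would be:

    $ ansible localhost -m ansible.builtin.setup -a 'gather_subset=!all'
    $ ansible localhost -m ansible.builtin.setup -a 'gather_subset=network'
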
Nov 25 09:01:01 np0005534694 CROND[4554]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 09:01:01 np0005534694 run-parts[4557]: (/etc/cron.hourly) starting 0anacron
Nov 25 09:01:01 np0005534694 anacron[4565]: Anacron started on 2025-11-25
Nov 25 09:01:01 np0005534694 anacron[4565]: Will run job `cron.daily' in 37 min.
Nov 25 09:01:01 np0005534694 anacron[4565]: Will run job `cron.weekly' in 57 min.
Nov 25 09:01:01 np0005534694 anacron[4565]: Will run job `cron.monthly' in 77 min.
Nov 25 09:01:01 np0005534694 anacron[4565]: Jobs will be executed sequentially
Nov 25 09:01:01 np0005534694 run-parts[4567]: (/etc/cron.hourly) finished 0anacron
Nov 25 09:01:01 np0005534694 CROND[4553]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 09:01:01 np0005534694 python3[4591]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 09:01:03 np0005534694 python3[4617]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+gNTjTZDQgtUOszUcfFNwRDhaF3fpuKv4WnYmO9LCSnBOvxKg32kLsWN4UIUhuvnqQCzM+/poM7RT3r9cQ1IsDccOYvVT/Wtp5oKX+m81fz8DhCMYa72X9A2pIXwxQsBgRDPh3oTqqaSR8H+rObzkL49NEB7PB37PSqa7bTT+RtyPa94m/b+vmwdC/CwfC0YTEjQEMXEM2Mx4n7pVA/kVzra/ScNFDdQaJmKWoA28J/ubqkvnvrg0+Z4ywfQ/0sBAXWNOR6LvQ2x4Rqd3uiHgobysScVRo2/+J5NDB1wN+flg8+oxSlhauY+97xKn03faiQ5y1cEiMT5A0Bhn89bTx0VUxzmNXXtQVA9xv3gSfMyOpzGaqf9n4N8yedXl6TXe+ascB5uWelrP6b2aqonb4EtqM7AZYKSLWXDwn7czhaMjUge52BUOKmb0asJdlTXpqZdVVMPfBnYGKIE8DNcp99rTtP5JwVDYKitUQAB45plvpUUYoKYI9h79SFYkhws= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:03 np0005534694 python3[4641]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:03 np0005534694 python3[4740]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:03 np0005534694 python3[4811]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764061263.5463035-251-178354257821302/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=46be1ba69aef4b9caa3787efccecaa0c_id_rsa follow=False checksum=ab873ac71b169d81ba60edcb9a3df54902eb3861 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:04 np0005534694 python3[4934]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:04 np0005534694 python3[5005]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764061264.1845975-306-31609187662607/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=46be1ba69aef4b9caa3787efccecaa0c_id_rsa.pub follow=False checksum=bf193b190ac8dfe414ab48ea4e2bf3db22ed6209 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:05 np0005534694 python3[5053]: ansible-ping Invoked with data=pong
Nov 25 09:01:06 np0005534694 python3[5077]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:01:07 np0005534694 python3[5131]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 09:01:08 np0005534694 python3[5163]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534694 python3[5187]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534694 python3[5211]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534694 python3[5235]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534694 python3[5259]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:09 np0005534694 python3[5283]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
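
The mode values in these ansible-file entries are octal permissions logged as decimal integers: 493 is 0755, 448 is 0700, 384 is 0600 and 420 is 0644, e.g.:

    $ python3 -c 'print(oct(493), oct(448), oct(384), oct(420))'
    0o755 0o700 0o600 0o644
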
Nov 25 09:01:11 np0005534694 sudo[5307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsmyzpyiiomeyjnzlzhthbzjbdfimcmq ; /usr/bin/python3'
Nov 25 09:01:11 np0005534694 sudo[5307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:11 np0005534694 python3[5309]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:11 np0005534694 sudo[5307]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:11 np0005534694 sudo[5385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhwhzugwubkuyopsvmslsklpszknhivr ; /usr/bin/python3'
Nov 25 09:01:11 np0005534694 sudo[5385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:11 np0005534694 python3[5387]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:11 np0005534694 sudo[5385]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:11 np0005534694 sudo[5458]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhmptvkaupefekugxynwxpydyqvxucuu ; /usr/bin/python3'
Nov 25 09:01:11 np0005534694 sudo[5458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:11 np0005534694 python3[5460]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061271.2522264-31-274281265931958/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:11 np0005534694 sudo[5458]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:12 np0005534694 python3[5508]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:12 np0005534694 python3[5532]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:12 np0005534694 python3[5556]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534694 python3[5580]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534694 python3[5604]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534694 python3[5628]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534694 python3[5652]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:13 np0005534694 python3[5676]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:14 np0005534694 python3[5700]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:14 np0005534694 python3[5724]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:14 np0005534694 python3[5748]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:14 np0005534694 python3[5772]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:15 np0005534694 python3[5796]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:15 np0005534694 python3[5820]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:15 np0005534694 python3[5844]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:15 np0005534694 python3[5868]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:15 np0005534694 python3[5892]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534694 python3[5916]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534694 python3[5940]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534694 python3[5964]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534694 python3[5988]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:16 np0005534694 python3[6012]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:17 np0005534694 python3[6036]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:17 np0005534694 python3[6060]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:17 np0005534694 python3[6084]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:17 np0005534694 python3[6108]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:01:19 np0005534694 sudo[6132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxjrmrxertcinsdgeyrhflgbzraypnsb ; /usr/bin/python3'
Nov 25 09:01:19 np0005534694 sudo[6132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:19 np0005534694 python3[6134]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 09:01:19 np0005534694 systemd[1]: Starting Time & Date Service...
Nov 25 09:01:19 np0005534694 systemd[1]: Started Time & Date Service.
Nov 25 09:01:19 np0005534694 systemd-timedated[6136]: Changed time zone to 'UTC' (UTC).
Nov 25 09:01:19 np0005534694 sudo[6132]: pam_unix(sudo:session): session closed for user root
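The community.general.timezone task with name=UTC goes through systemd-timedated (started on demand just above) rather than repointing /etc/localtime by hand, which is why the "Changed time zone to 'UTC'" record comes from systemd-timedated. The same change from Python, assuming timedatectl is available:

    import subprocess

    # timedatectl asks systemd-timedated over D-Bus to validate the zone
    # and update /etc/localtime, matching the records above.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)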
Nov 25 09:01:19 np0005534694 sudo[6163]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrcpapvmejttklgvxudbhtuhovjtivuv ; /usr/bin/python3'
Nov 25 09:01:19 np0005534694 sudo[6163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:20 np0005534694 python3[6165]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:20 np0005534694 sudo[6163]: pam_unix(sudo:session): session closed for user root
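A reading note for the mode= values in these file tasks: Ansible logs the argument it received, so an unquoted octal literal in a playbook arrives as a decimal integer. mode=511 above is 0o777, and mode=288 on the sudoers drop-in further down is 0o440; quoted forms such as mode=0755 and mode=0600 pass through as octal strings. The equivalence, checked in Python:

    >>> oct(511)        # /etc/nodepool created with mode=511
    '0o777'
    >>> oct(288)        # /etc/sudoers.d/zuul-sudo-grep created with mode=288
    '0o440'
    >>> int("0755", 8)  # quoted octal string, as the later tasks pass it
    493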
Nov 25 09:01:20 np0005534694 python3[6241]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:20 np0005534694 python3[6312]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764061280.163944-251-131491220359048/source _original_basename=tmp05af6kxg follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:20 np0005534694 python3[6412]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:21 np0005534694 python3[6483]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764061280.740466-301-59644737923628/source _original_basename=tmpavcszjfc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:21 np0005534694 sudo[6583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbpuoipwmljyrbigucbaeveuypuwumdh ; /usr/bin/python3'
Nov 25 09:01:21 np0005534694 sudo[6583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:21 np0005534694 python3[6585]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:21 np0005534694 sudo[6583]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:21 np0005534694 sudo[6656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipleljqyneqozphifsbixcujyvzoqgue ; /usr/bin/python3'
Nov 25 09:01:21 np0005534694 sudo[6656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:22 np0005534694 python3[6658]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764061281.6051984-381-2080141055906/source _original_basename=tmpe9tmm2dg follow=False checksum=43d6bf474fe3176ca4d99e899bb0d692cb0324b7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:22 np0005534694 sudo[6656]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:22 np0005534694 python3[6706]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:01:22 np0005534694 python3[6732]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:01:22 np0005534694 sudo[6810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsetqjxiuscpxtyhxasmgxftgnhmwwjy ; /usr/bin/python3'
Nov 25 09:01:22 np0005534694 sudo[6810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:22 np0005534694 python3[6812]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:01:22 np0005534694 sudo[6810]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:23 np0005534694 sudo[6883]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvbbgxtitnnxfkolwxkurvphpigcracf ; /usr/bin/python3'
Nov 25 09:01:23 np0005534694 sudo[6883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:23 np0005534694 python3[6885]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061282.7240343-451-124705450766270/source _original_basename=tmp0_ui_jty follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:23 np0005534694 sudo[6883]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:23 np0005534694 sudo[6934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upbmbdywiaaflfprfyszbzaxascwfvds ; /usr/bin/python3'
Nov 25 09:01:23 np0005534694 sudo[6934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:23 np0005534694 python3[6936]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e08-49e2-bfa5-76fa-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:01:23 np0005534694 sudo[6934]: pam_unix(sudo:session): session closed for user root
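The last few records are a write-then-verify pattern: install /etc/sudoers.d/zuul-sudo-grep with mode 0440, then run /usr/sbin/visudo -c so a syntax error cannot silently break sudo for everyone. A sketch of the same safeguard that validates the candidate file before installing it (visudo -cf checks an arbitrary file rather than the live sudoers):

    import os
    import shutil
    import subprocess
    import tempfile

    def install_sudoers_dropin(name: str, content: str) -> None:
        # Validate in a temp file first; only move it into place if it parses.
        with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
            tmp.write(content)
        try:
            subprocess.run(["/usr/sbin/visudo", "-cf", tmp.name], check=True)
            dest = os.path.join("/etc/sudoers.d", name)
            shutil.move(tmp.name, dest)
            os.chmod(dest, 0o440)   # logged as mode=288 (decimal) above
        finally:
            if os.path.exists(tmp.name):
                os.unlink(tmp.name)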
Nov 25 09:01:24 np0005534694 python3[6964]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                             _uses_shell=True zuul_log_id=fa163e08-49e2-bfa5-76fa-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 09:01:25 np0005534694 python3[6992]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:34 np0005534694 irqbalance[742]: Cannot change IRQ 43 affinity: Operation not permitted
Nov 25 09:01:34 np0005534694 irqbalance[742]: IRQ 43 affinity is now unmanaged
Nov 25 09:01:36 np0005534694 chronyd[752]: Selected source 45.79.192.248 (2.centos.pool.ntp.org)
Nov 25 09:01:41 np0005534694 sudo[7016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsovoznkjiauluogqtehusijddehiarp ; /usr/bin/python3'
Nov 25 09:01:41 np0005534694 sudo[7016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:01:41 np0005534694 python3[7018]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:01:41 np0005534694 sudo[7016]: pam_unix(sudo:session): session closed for user root
Nov 25 09:01:49 np0005534694 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 09:02:10 np0005534694 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 25 09:02:10 np0005534694 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 09:02:10 np0005534694 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 09:02:10 np0005534694 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Nov 25 09:02:10 np0005534694 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Nov 25 09:02:10 np0005534694 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Nov 25 09:02:10 np0005534694 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Nov 25 09:02:10 np0005534694 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8610] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 09:02:10 np0005534694 systemd-udevd[7022]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8799] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8816] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8818] device (eth1): carrier: link connected
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8819] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8823] policy: auto-activating connection 'Wired connection 1' (4dea2d72-efea-3ded-bb5c-4e572717d306)
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8826] device (eth1): Activation: starting connection 'Wired connection 1' (4dea2d72-efea-3ded-bb5c-4e572717d306)
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8827] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8829] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8831] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:02:10 np0005534694 NetworkManager[809]: <info>  [1764061330.8834] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:11 np0005534694 python3[7048]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e08-49e2-c32a-ccd7-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
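ip -j link emits the link table as JSON, which is why the playbook shells out to it instead of scraping the human-readable output. A minimal consumer, assuming iproute2's usual JSON field names (ifname, operstate):

    import json
    import subprocess

    out = subprocess.run(["ip", "-j", "link"], check=True,
                         capture_output=True, text=True).stdout
    for link in json.loads(out):
        # e.g. "eth1 UP" once the hot-plugged virtio NIC above is configured
        print(link["ifname"], link.get("operstate", "?"))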
Nov 25 09:02:21 np0005534694 sudo[7126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxdggqqzbfwuofxlcxkkksrwqtanphld ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:02:21 np0005534694 sudo[7126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:02:21 np0005534694 python3[7128]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:02:21 np0005534694 sudo[7126]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:21 np0005534694 sudo[7199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfpkrrwlftxldfwefwgznljeppxfdpqo ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:02:21 np0005534694 sudo[7199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:02:21 np0005534694 python3[7201]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764061340.9963691-113-78518989235234/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=7c1ac0ef12414f187590b184b7f3279f7633d6ff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:02:21 np0005534694 sudo[7199]: pam_unix(sudo:session): session closed for user root
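The copied ci-private-network.nmconnection is a NetworkManager keyfile; NM only loads keyfiles that are root-owned with mode 0600, which is why the task pins owner=root mode=0600. The template body itself is not logged, so the profile below is hypothetical (the interface name and static addressing are assumptions), but it shows the shape of what gets written before the NetworkManager restart in the next task:

    import os

    # Hypothetical keyfile body -- the real template is not visible in the log.
    KEYFILE = """\
    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1
    autoconnect=true

    [ipv4]
    method=manual
    address1=192.168.122.10/24
    """

    path = ("/etc/NetworkManager/system-connections/"
            "ci-private-network.nmconnection")
    with open(path, "w") as f:
        f.write(KEYFILE)
    os.chmod(path, 0o600)   # NM ignores keyfiles that are group/world readable

A restart (as done here) or an `nmcli connection reload` is what makes NM pick the profile up; it finally auto-activates on eth1 at 09:03:07 below, after the assumed 'Wired connection 1' fails to get a DHCP lease.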
Nov 25 09:02:21 np0005534694 sudo[7249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnhexfthrzahtxxjtzqgwerqdqecthhx ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:02:21 np0005534694 sudo[7249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:02:21 np0005534694 python3[7251]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:02:22 np0005534694 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 09:02:22 np0005534694 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 09:02:22 np0005534694 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0063] caught SIGTERM, shutting down normally.
Nov 25 09:02:22 np0005534694 systemd[1]: Stopping Network Manager...
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0068] dhcp4 (eth0): canceled DHCP transaction
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0068] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0068] dhcp4 (eth0): state changed no lease
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0070] dhcp6 (eth0): canceled DHCP transaction
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0070] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0070] dhcp6 (eth0): state changed no lease
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0071] manager: NetworkManager state is now CONNECTING
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0149] dhcp4 (eth1): canceled DHCP transaction
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0150] dhcp4 (eth1): state changed no lease
Nov 25 09:02:22 np0005534694 NetworkManager[809]: <info>  [1764061342.0168] exiting (success)
Nov 25 09:02:22 np0005534694 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:02:22 np0005534694 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:02:22 np0005534694 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 09:02:22 np0005534694 systemd[1]: Stopped Network Manager.
Nov 25 09:02:22 np0005534694 systemd[1]: Starting Network Manager...
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.0561] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:f8c189f2-455d-46a5-8a09-714641cd81d1)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.0562] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.0600] manager[0x55e525981090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 09:02:22 np0005534694 systemd[1]: Starting Hostname Service...
Nov 25 09:02:22 np0005534694 systemd[1]: Started Hostname Service.
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1093] hostname: hostname: using hostnamed
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1093] hostname: static hostname changed from (none) to "np0005534694"
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1095] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1098] manager[0x55e525981090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1098] manager[0x55e525981090]: rfkill: WWAN hardware radio set enabled
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1117] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1117] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1118] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1118] manager: Networking is enabled by state file
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1120] settings: Loaded settings plugin: keyfile (internal)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1123] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1139] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1146] dhcp: init: Using DHCP client 'internal'
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1149] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1153] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1157] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1163] device (lo): Activation: starting connection 'lo' (28ef2950-3e02-469b-b897-6f6f0a688c29)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1169] device (eth0): carrier: link connected
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1173] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1176] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1177] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1182] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1187] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1191] device (eth1): carrier: link connected
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1195] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1199] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (4dea2d72-efea-3ded-bb5c-4e572717d306) (indicated)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1200] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1204] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1209] device (eth1): Activation: starting connection 'Wired connection 1' (4dea2d72-efea-3ded-bb5c-4e572717d306)
Nov 25 09:02:22 np0005534694 systemd[1]: Started Network Manager.
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1213] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1216] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1219] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1230] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1231] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1233] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1234] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1236] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1237] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1241] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1260] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1264] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1266] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1272] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1275] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1280] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1285] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1289] device (lo): Activation: successful, device activated.
Nov 25 09:02:22 np0005534694 systemd[1]: Starting Network Manager Wait Online...
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1304] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 25 09:02:22 np0005534694 NetworkManager[7262]: <info>  [1764061342.1312] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 09:02:22 np0005534694 sudo[7249]: pam_unix(sudo:session): session closed for user root
Nov 25 09:02:22 np0005534694 python3[7323]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e08-49e2-c32a-ccd7-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:02:23 np0005534694 NetworkManager[7262]: <info>  [1764061343.2022] dhcp6 (eth0): state changed new lease, address=2001:db8::1c1
Nov 25 09:02:23 np0005534694 NetworkManager[7262]: <info>  [1764061343.2031] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:02:23 np0005534694 NetworkManager[7262]: <info>  [1764061343.2051] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:02:23 np0005534694 NetworkManager[7262]: <info>  [1764061343.2053] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:02:23 np0005534694 NetworkManager[7262]: <info>  [1764061343.2056] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:02:23 np0005534694 NetworkManager[7262]: <info>  [1764061343.2061] device (eth0): Activation: successful, device activated.
Nov 25 09:02:23 np0005534694 NetworkManager[7262]: <info>  [1764061343.2064] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 09:02:33 np0005534694 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:02:52 np0005534694 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.3848] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:03:07 np0005534694 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:03:07 np0005534694 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4071] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4073] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4078] device (eth1): Activation: successful, device activated.
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4082] manager: startup complete
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4083] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <warn>  [1764061387.4087] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4092] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 09:03:07 np0005534694 systemd[1]: Finished Network Manager Wait Online.
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4149] dhcp4 (eth1): canceled DHCP transaction
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4149] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4149] dhcp4 (eth1): state changed no lease
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4158] policy: auto-activating connection 'ci-private-network' (8d7be350-b956-5589-a7f6-ba574a72fbd9)
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4161] device (eth1): Activation: starting connection 'ci-private-network' (8d7be350-b956-5589-a7f6-ba574a72fbd9)
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4162] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4163] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4168] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4174] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4196] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4197] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:03:07 np0005534694 NetworkManager[7262]: <info>  [1764061387.4201] device (eth1): Activation: successful, device activated.
Nov 25 09:03:14 np0005534694 systemd[4370]: Starting Mark boot as successful...
Nov 25 09:03:14 np0005534694 systemd[4370]: Finished Mark boot as successful.
Nov 25 09:03:17 np0005534694 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:03:22 np0005534694 sshd-session[4379]: Received disconnect from 192.168.26.12 port 43220:11: disconnected by user
Nov 25 09:03:22 np0005534694 sshd-session[4379]: Disconnected from user zuul 192.168.26.12 port 43220
Nov 25 09:03:22 np0005534694 sshd-session[4366]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:03:22 np0005534694 systemd-logind[744]: Session 1 logged out. Waiting for processes to exit.
Nov 25 09:03:44 np0005534694 sshd-session[7371]: Accepted publickey for zuul from 192.168.26.12 port 56170 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:03:44 np0005534694 systemd-logind[744]: New session 3 of user zuul.
Nov 25 09:03:44 np0005534694 systemd[1]: Started Session 3 of User zuul.
Nov 25 09:03:44 np0005534694 sshd-session[7371]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:03:44 np0005534694 sudo[7450]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebwvzlmzatdlvtevpdwknyfakmajrcqj ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:03:44 np0005534694 sudo[7450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:03:44 np0005534694 python3[7452]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:03:44 np0005534694 sudo[7450]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:44 np0005534694 sudo[7523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tukuvyckvhcfxswqdrgptghbvfciwxlv ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 25 09:03:44 np0005534694 sudo[7523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:03:44 np0005534694 python3[7525]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061424.284707-379-67648858841313/source _original_basename=tmp3gzhqmr0 follow=False checksum=5493b85a684a9b4806ca892e69594374a0bfd8b8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:03:44 np0005534694 sudo[7523]: pam_unix(sudo:session): session closed for user root
Nov 25 09:03:45 np0005534694 chronyd[752]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Nov 25 09:03:47 np0005534694 sshd-session[7374]: Connection closed by 192.168.26.12 port 56170
Nov 25 09:03:47 np0005534694 sshd-session[7371]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:03:47 np0005534694 systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 09:03:47 np0005534694 systemd-logind[744]: Session 3 logged out. Waiting for processes to exit.
Nov 25 09:03:47 np0005534694 systemd-logind[744]: Removed session 3.
Nov 25 09:06:14 np0005534694 systemd[4370]: Created slice User Background Tasks Slice.
Nov 25 09:06:14 np0005534694 systemd[4370]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 09:06:14 np0005534694 systemd[4370]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 09:08:20 np0005534694 sshd-session[7554]: Accepted publickey for zuul from 192.168.26.12 port 53748 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:08:20 np0005534694 systemd-logind[744]: New session 4 of user zuul.
Nov 25 09:08:20 np0005534694 systemd[1]: Started Session 4 of User zuul.
Nov 25 09:08:20 np0005534694 sshd-session[7554]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:08:21 np0005534694 sudo[7581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlfluliqpvditzrillyzdxoczsiznpzt ; /usr/bin/python3'
Nov 25 09:08:21 np0005534694 sudo[7581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:21 np0005534694 python3[7583]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                             _uses_shell=True zuul_log_id=fa163e08-49e2-292d-b97b-000000001cda-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:21 np0005534694 sudo[7581]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:21 np0005534694 sudo[7610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmdfkfykyesifpnzialrikahlifvrzor ; /usr/bin/python3'
Nov 25 09:08:21 np0005534694 sudo[7610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:21 np0005534694 python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:21 np0005534694 sudo[7610]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:21 np0005534694 sudo[7636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejlxasjitxblgwlbrvbrfjxhzdzqjmhd ; /usr/bin/python3'
Nov 25 09:08:21 np0005534694 sudo[7636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:21 np0005534694 python3[7638]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:21 np0005534694 sudo[7636]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:21 np0005534694 sudo[7662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blkbcpswgckswxxjrzpnieofzygpjkni ; /usr/bin/python3'
Nov 25 09:08:21 np0005534694 sudo[7662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:21 np0005534694 python3[7664]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:21 np0005534694 sudo[7662]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:21 np0005534694 sudo[7688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsgbpyyxbqobqflbuxyylxqdqvpqprqh ; /usr/bin/python3'
Nov 25 09:08:21 np0005534694 sudo[7688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:22 np0005534694 python3[7690]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:22 np0005534694 sudo[7688]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:22 np0005534694 sudo[7714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcyngntefjqcbzcfcdzqrhxjhcmethwq ; /usr/bin/python3'
Nov 25 09:08:22 np0005534694 sudo[7714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:22 np0005534694 python3[7716]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:22 np0005534694 sudo[7714]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:22 np0005534694 sudo[7792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cblukigeyijwiywlsucrodxnjbkocrdy ; /usr/bin/python3'
Nov 25 09:08:22 np0005534694 sudo[7792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:23 np0005534694 python3[7794]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:08:23 np0005534694 sudo[7792]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:23 np0005534694 sudo[7865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmkecgsxufdxatuvftvuwugxwrrzuhvm ; /usr/bin/python3'
Nov 25 09:08:23 np0005534694 sudo[7865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:23 np0005534694 python3[7867]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061702.8314457-510-134179035950197/source _original_basename=tmpi1rxso8c follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:08:23 np0005534694 sudo[7865]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:23 np0005534694 sudo[7915]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgakajpypgkywiolvxeyynjchgxkkopy ; /usr/bin/python3'
Nov 25 09:08:23 np0005534694 sudo[7915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:24 np0005534694 python3[7917]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:08:24 np0005534694 systemd[1]: Reloading.
Nov 25 09:08:24 np0005534694 systemd-rc-local-generator[7936]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:08:24 np0005534694 sudo[7915]: pam_unix(sudo:session): session closed for user root
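The override.conf drop-in under /etc/systemd/system.conf.d/ plus the daemon_reload that produces the "Reloading." record is the standard way to change systemd manager defaults without editing /etc/systemd/system.conf. The drop-in's contents are not logged; given that the very next task polls for io.max to appear under the slices, a plausible assumption is that it enables IO accounting, which is what exposes the cgroup v2 io controller files:

    import pathlib
    import subprocess

    # Assumed drop-in body: turning on IO accounting delegates the io
    # controller to the top-level slices, creating the io.max files
    # that the wait_for task below polls for.
    DROPIN = "[Manager]\nDefaultIOAccounting=yes\n"

    path = pathlib.Path("/etc/systemd/system.conf.d/override.conf")
    path.parent.mkdir(mode=0o755, exist_ok=True)
    path.write_text(DROPIN)
    subprocess.run(["systemctl", "daemon-reload"], check=True)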
Nov 25 09:08:25 np0005534694 sudo[7971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dggivruzxpmezxbbfjwhxrseecfbyhva ; /usr/bin/python3'
Nov 25 09:08:25 np0005534694 sudo[7971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:25 np0005534694 python3[7973]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 09:08:25 np0005534694 sudo[7971]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:25 np0005534694 sudo[7997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzdurgobtjrnvnvjihkzionyhhhmlhya ; /usr/bin/python3'
Nov 25 09:08:25 np0005534694 sudo[7997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:25 np0005534694 python3[7999]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:25 np0005534694 sudo[7997]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:25 np0005534694 sudo[8025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odeqofmnjiuxwelwxvkzzbwbvftxloaw ; /usr/bin/python3'
Nov 25 09:08:25 np0005534694 sudo[8025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:25 np0005534694 python3[8027]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:25 np0005534694 sudo[8025]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:25 np0005534694 sudo[8053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luaoyftcavopqpuvxvvxqspabwkfukvv ; /usr/bin/python3'
Nov 25 09:08:25 np0005534694 sudo[8053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:26 np0005534694 python3[8055]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:26 np0005534694 sudo[8053]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:26 np0005534694 sudo[8081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-infpvqheuaigekbdsdqiytqyexfnuzno ; /usr/bin/python3'
Nov 25 09:08:26 np0005534694 sudo[8081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:26 np0005534694 python3[8083]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:08:26 np0005534694 sudo[8081]: pam_unix(sudo:session): session closed for user root
Nov 25 09:08:26 np0005534694 python3[8110]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                             _uses_shell=True zuul_log_id=fa163e08-49e2-292d-b97b-000000001ce1-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
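The four echo commands throttle the root disk for every top-level slice through the cgroup v2 io controller: 252:0 is the MAJ:MIN that lsblk reported for /dev/vda at 09:08:21, 18000 is the read/write IOPS cap, and 262144000 bytes/s is exactly 250 MiB/s (250 * 1024 * 1024). A sketch of the same writes from Python, deriving the device numbers instead of hard-coding them:

    import os

    dev = os.stat("/dev/vda").st_rdev
    majmin = f"{os.major(dev)}:{os.minor(dev)}"   # "252:0" on this node
    limits = (f"{majmin} riops=18000 wiops=18000 "
              f"rbps=262144000 wbps=262144000")   # 250 MiB/s each way

    for unit in ("init.scope", "machine.slice", "system.slice", "user.slice"):
        # Writing to a slice's io.max caps everything in that subtree.
        with open(f"/sys/fs/cgroup/{unit}/io.max", "w") as f:
            f.write(limits + "\n")

The follow-up command above simply cats each io.max back to confirm the limits took effect.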
Nov 25 09:08:27 np0005534694 python3[8140]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 09:08:29 np0005534694 sshd-session[7557]: Connection closed by 192.168.26.12 port 53748
Nov 25 09:08:29 np0005534694 sshd-session[7554]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:08:29 np0005534694 systemd-logind[744]: Session 4 logged out. Waiting for processes to exit.
Nov 25 09:08:29 np0005534694 systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 09:08:29 np0005534694 systemd[1]: session-4.scope: Consumed 2.852s CPU time.
Nov 25 09:08:29 np0005534694 systemd-logind[744]: Removed session 4.
Nov 25 09:08:31 np0005534694 sshd-session[8145]: Accepted publickey for zuul from 192.168.26.12 port 34318 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:08:31 np0005534694 systemd-logind[744]: New session 5 of user zuul.
Nov 25 09:08:31 np0005534694 systemd[1]: Started Session 5 of User zuul.
Nov 25 09:08:31 np0005534694 sshd-session[8145]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:08:31 np0005534694 sudo[8172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syzrskndticegbxywxvhazlvjinnswen ; /usr/bin/python3'
Nov 25 09:08:31 np0005534694 sudo[8172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:08:31 np0005534694 python3[8174]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 09:08:46 np0005534694 kernel: SELinux:  Converting 387 SID table entries...
Nov 25 09:08:46 np0005534694 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:08:46 np0005534694 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:08:46 np0005534694 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:08:46 np0005534694 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:08:46 np0005534694 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:08:46 np0005534694 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:08:46 np0005534694 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:08:53 np0005534694 kernel: SELinux:  Converting 387 SID table entries...
Nov 25 09:08:53 np0005534694 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:08:53 np0005534694 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:08:53 np0005534694 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:08:53 np0005534694 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:08:53 np0005534694 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:08:53 np0005534694 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:08:53 np0005534694 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:08:59 np0005534694 kernel: SELinux:  Converting 387 SID table entries...
Nov 25 09:08:59 np0005534694 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:08:59 np0005534694 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:08:59 np0005534694 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:08:59 np0005534694 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:08:59 np0005534694 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:08:59 np0005534694 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:08:59 np0005534694 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:09:00 np0005534694 setsebool[8241]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 09:09:00 np0005534694 setsebool[8241]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
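
The two boolean changes above, followed immediately by a policy reload (the "Converting ... SID table entries" kernel messages), suggest a persistent -P change; a sketch of the equivalent command:

    # -P writes the booleans into the policy store so they survive reboots;
    # this triggers the SELinux policy rebuild seen in the kernel log.
    sudo setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1
    # Confirm the new values.
    getsebool virt_use_nfs virt_sandbox_use_all_caps
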
Nov 25 09:09:08 np0005534694 kernel: SELinux:  Converting 390 SID table entries...
Nov 25 09:09:08 np0005534694 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:09:08 np0005534694 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:09:08 np0005534694 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:09:08 np0005534694 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:09:08 np0005534694 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:09:08 np0005534694 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:09:08 np0005534694 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:09:21 np0005534694 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 09:09:21 np0005534694 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:09:21 np0005534694 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:09:21 np0005534694 systemd[1]: Reloading.
Nov 25 09:09:21 np0005534694 systemd-rc-local-generator[8991]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:09:22 np0005534694 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:09:22 np0005534694 sudo[8172]: pam_unix(sudo:session): session closed for user root
Nov 25 09:09:35 np0005534694 python3[21916]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                              _uses_shell=True zuul_log_id=fa163e08-49e2-29e8-c457-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:09:36 np0005534694 kernel: evm: overlay not supported
Nov 25 09:09:36 np0005534694 systemd[4370]: Starting D-Bus User Message Bus...
Nov 25 09:09:36 np0005534694 dbus-broker-launch[22642]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 09:09:36 np0005534694 dbus-broker-launch[22642]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 09:09:36 np0005534694 systemd[4370]: Started D-Bus User Message Bus.
Nov 25 09:09:36 np0005534694 dbus-broker-lau[22642]: Ready
Nov 25 09:09:36 np0005534694 systemd[4370]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 09:09:36 np0005534694 systemd[4370]: Created slice Slice /user.
Nov 25 09:09:36 np0005534694 systemd[4370]: podman-22566.scope: unit configures an IP firewall, but not running as root.
Nov 25 09:09:36 np0005534694 systemd[4370]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 09:09:36 np0005534694 systemd[4370]: Started podman-22566.scope.
Nov 25 09:09:36 np0005534694 systemd[4370]: Started podman-pause-085f84f7.scope.
Nov 25 09:09:37 np0005534694 sshd-session[8148]: Connection closed by 192.168.26.12 port 34318
Nov 25 09:09:37 np0005534694 sshd-session[8145]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:09:37 np0005534694 systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 09:09:37 np0005534694 systemd[1]: session-5.scope: Consumed 43.752s CPU time.
Nov 25 09:09:37 np0005534694 systemd-logind[744]: Session 5 logged out. Waiting for processes to exit.
Nov 25 09:09:37 np0005534694 systemd-logind[744]: Removed session 5.
Nov 25 09:09:43 np0005534694 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:09:43 np0005534694 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:09:43 np0005534694 systemd[1]: man-db-cache-update.service: Consumed 27.012s CPU time.
Nov 25 09:09:43 np0005534694 systemd[1]: run-r722e0a0cff9e457896d2c2b831e53dc7.service: Deactivated successfully.
Nov 25 09:09:52 np0005534694 sshd-session[29617]: Connection closed by 192.168.26.191 port 54636 [preauth]
Nov 25 09:09:52 np0005534694 sshd-session[29618]: Connection closed by 192.168.26.191 port 54644 [preauth]
Nov 25 09:09:52 np0005534694 sshd-session[29619]: Unable to negotiate with 192.168.26.191 port 54660: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 25 09:09:52 np0005534694 sshd-session[29620]: Unable to negotiate with 192.168.26.191 port 54670: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 25 09:09:52 np0005534694 sshd-session[29621]: Unable to negotiate with 192.168.26.191 port 54682: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 25 09:10:02 np0005534694 sshd-session[29627]: Accepted publickey for zuul from 192.168.26.12 port 40084 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:10:02 np0005534694 systemd-logind[744]: New session 6 of user zuul.
Nov 25 09:10:02 np0005534694 systemd[1]: Started Session 6 of User zuul.
Nov 25 09:10:02 np0005534694 sshd-session[29627]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:10:02 np0005534694 python3[29654]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4HO9/Pb272mGI+U/szIpw/9oHLx4rtGIraRz1dlV41+TJMU38ktCW6c/rIbXW5YjEe8m7up3kNe2OypGHdxy8= zuul@np0005534693
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:10:02 np0005534694 sudo[29678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oywpsltwpeidqessvenrdzidtlhakzrd ; /usr/bin/python3'
Nov 25 09:10:02 np0005534694 sudo[29678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:02 np0005534694 python3[29680]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4HO9/Pb272mGI+U/szIpw/9oHLx4rtGIraRz1dlV41+TJMU38ktCW6c/rIbXW5YjEe8m7up3kNe2OypGHdxy8= zuul@np0005534693
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:10:02 np0005534694 sudo[29678]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:03 np0005534694 sudo[29704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxkyhomctmjluzywzjwuhmywilxdruot ; /usr/bin/python3'
Nov 25 09:10:03 np0005534694 sudo[29704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:03 np0005534694 python3[29706]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005534694 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 09:10:03 np0005534694 useradd[29708]: new group: name=cloud-admin, GID=1002
Nov 25 09:10:03 np0005534694 useradd[29708]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 25 09:10:03 np0005534694 sudo[29704]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:03 np0005534694 sudo[29738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nflhcyrkhutitjndrzutvzcwxouiauhh ; /usr/bin/python3'
Nov 25 09:10:03 np0005534694 sudo[29738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:03 np0005534694 python3[29740]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4HO9/Pb272mGI+U/szIpw/9oHLx4rtGIraRz1dlV41+TJMU38ktCW6c/rIbXW5YjEe8m7up3kNe2OypGHdxy8= zuul@np0005534693
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:10:03 np0005534694 sudo[29738]: pam_unix(sudo:session): session closed for user root
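
The three authorized_key invocations above push the same ECDSA public key to zuul, root, and the newly created cloud-admin user. A hand-rolled sketch of the cloud-admin case, mimicking what ansible.posix.authorized_key does with state=present (key string abbreviated; paths are the module defaults):

    # Append the key only if it is not already present (idempotent).
    U=cloud-admin
    H=$(getent passwd "$U" | cut -d: -f6)
    sudo install -d -m 0700 -o "$U" -g "$U" "$H/.ssh"
    KEY='ecdsa-sha2-nistp256 AAAA... zuul@np0005534693'
    sudo grep -qxF "$KEY" "$H/.ssh/authorized_keys" 2>/dev/null \
      || echo "$KEY" | sudo tee -a "$H/.ssh/authorized_keys" >/dev/null
    sudo chmod 0600 "$H/.ssh/authorized_keys"
    sudo chown "$U:$U" "$H/.ssh/authorized_keys"
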
Nov 25 09:10:03 np0005534694 sudo[29816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvbgvyzrajssovybsjtnomzuvjjrieuu ; /usr/bin/python3'
Nov 25 09:10:03 np0005534694 sudo[29816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:03 np0005534694 python3[29818]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:10:03 np0005534694 sudo[29816]: pam_unix(sudo:session): session closed for user root
Nov 25 09:10:04 np0005534694 sudo[29889]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bniyjmmmpuvwuxjhpwzoiczndvzogcib ; /usr/bin/python3'
Nov 25 09:10:04 np0005534694 sudo[29889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:04 np0005534694 python3[29891]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061803.729116-152-51618234821601/source _original_basename=tmptpnveux1 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:10:04 np0005534694 sudo[29889]: pam_unix(sudo:session): session closed for user root
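
The copied /etc/sudoers.d/cloud-admin content is not logged (content=NOT_LOGGING_PARAMETER). Purely as a hypothetical example, a passwordless drop-in typical for this kind of CI user might be installed and validated like this:

    # Hypothetical drop-in contents; the real file is not in the log.
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/cloud-admin >/dev/null
    sudo chmod 0640 /etc/sudoers.d/cloud-admin   # mode matches the copy task above
    sudo visudo -cf /etc/sudoers.d/cloud-admin   # always syntax-check sudoers drop-ins
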
Nov 25 09:10:04 np0005534694 sudo[29939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojcxjjdqiumcjhktkjmeorxachswtkpa ; /usr/bin/python3'
Nov 25 09:10:04 np0005534694 sudo[29939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:10:05 np0005534694 python3[29941]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 25 09:10:05 np0005534694 systemd[1]: Starting Hostname Service...
Nov 25 09:10:05 np0005534694 systemd[1]: Started Hostname Service.
Nov 25 09:10:05 np0005534694 systemd-hostnamed[29945]: Changed pretty hostname to 'compute-0'
Nov 25 09:10:05 compute-0 systemd-hostnamed[29945]: Hostname set to <compute-0> (static)
Nov 25 09:10:05 compute-0 NetworkManager[7262]: <info>  [1764061805.1432] hostname: static hostname changed from "np0005534694" to "compute-0"
Nov 25 09:10:05 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:10:05 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:10:05 compute-0 sudo[29939]: pam_unix(sudo:session): session closed for user root
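
The hostname task is served by systemd-hostnamed, which the journal shows setting both the pretty and static names; the CLI equivalent of what the module did:

    # Sets the static (and pretty) hostname via systemd-hostnamed.
    sudo hostnamectl set-hostname compute-0
    hostnamectl status   # confirm: Static hostname: compute-0
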
Nov 25 09:10:05 compute-0 sshd-session[29630]: Connection closed by 192.168.26.12 port 40084
Nov 25 09:10:05 compute-0 sshd-session[29627]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:10:05 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 09:10:05 compute-0 systemd[1]: session-6.scope: Consumed 1.629s CPU time.
Nov 25 09:10:05 compute-0 systemd-logind[744]: Session 6 logged out. Waiting for processes to exit.
Nov 25 09:10:05 compute-0 systemd-logind[744]: Removed session 6.
Nov 25 09:10:15 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:10:35 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:13:24 compute-0 sshd-session[29964]: Accepted publickey for zuul from 192.168.26.191 port 48568 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:13:24 compute-0 systemd-logind[744]: New session 7 of user zuul.
Nov 25 09:13:24 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 25 09:13:24 compute-0 sshd-session[29964]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:13:24 compute-0 python3[30040]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:13:26 compute-0 sudo[30150]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iezojdwrgedicefwlaxhyhaggvtsacvl ; /usr/bin/python3'
Nov 25 09:13:26 compute-0 sudo[30150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:26 compute-0 python3[30152]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:26 compute-0 sudo[30150]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:26 compute-0 sudo[30223]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blevfcgazgfifqsezosnnwiawtwvchqh ; /usr/bin/python3'
Nov 25 09:13:26 compute-0 sudo[30223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:26 compute-0 python3[30225]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9338098-34381-78849199251242/source mode=0755 _original_basename=delorean.repo follow=False checksum=cdee622b4b81aba8f448eb3a0d6bf38022474867 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:26 compute-0 sudo[30223]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:26 compute-0 sudo[30249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luxskoklgrtfwysqpxbryzneytliinhx ; /usr/bin/python3'
Nov 25 09:13:26 compute-0 sudo[30249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:26 compute-0 python3[30251]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:26 compute-0 sudo[30249]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:26 compute-0 sudo[30322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxrubdgtqpctgspppswcqxmvvkngenul ; /usr/bin/python3'
Nov 25 09:13:26 compute-0 sudo[30322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:26 compute-0 python3[30324]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9338098-34381-78849199251242/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=717d1fa230cffa8c08764d71bd0b4a50d3a90cae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:26 compute-0 sudo[30322]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:26 compute-0 sudo[30348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhnphqgwuxkjkegqtenyquitebpyvtlv ; /usr/bin/python3'
Nov 25 09:13:26 compute-0 sudo[30348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-0 python3[30350]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:27 compute-0 sudo[30348]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:27 compute-0 sudo[30421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afxaizgoxahhyeltsslyaxgpbpevntmk ; /usr/bin/python3'
Nov 25 09:13:27 compute-0 sudo[30421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-0 python3[30423]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9338098-34381-78849199251242/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:27 compute-0 sudo[30421]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:27 compute-0 sudo[30447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymbckahsporziukhafdkxrredztczjgl ; /usr/bin/python3'
Nov 25 09:13:27 compute-0 sudo[30447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-0 python3[30449]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:27 compute-0 sudo[30447]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:27 compute-0 sudo[30520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzwrozvczpcndbpewmfovjzkhktsymki ; /usr/bin/python3'
Nov 25 09:13:27 compute-0 sudo[30520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-0 python3[30522]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9338098-34381-78849199251242/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:27 compute-0 sudo[30520]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:27 compute-0 sudo[30546]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elnzgzmjwshpwjbclkpvvaiwvudqqpwt ; /usr/bin/python3'
Nov 25 09:13:27 compute-0 sudo[30546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:27 compute-0 python3[30548]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:27 compute-0 sudo[30546]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-0 sudo[30619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvtfsyeixcvwzmvragkrkpwtderpuqbx ; /usr/bin/python3'
Nov 25 09:13:28 compute-0 sudo[30619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-0 python3[30621]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9338098-34381-78849199251242/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:28 compute-0 sudo[30619]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-0 sudo[30645]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzzmicdivfmdfvqgyoumoxufmjwbzmfy ; /usr/bin/python3'
Nov 25 09:13:28 compute-0 sudo[30645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-0 python3[30647]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:28 compute-0 sudo[30645]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-0 sudo[30718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnwuyigyooducplucgkzkpgmbwacqjiy ; /usr/bin/python3'
Nov 25 09:13:28 compute-0 sudo[30718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-0 python3[30720]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9338098-34381-78849199251242/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:28 compute-0 sudo[30718]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-0 sudo[30744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuxyrhivprwzltufbfthbxrwtgqvjasr ; /usr/bin/python3'
Nov 25 09:13:28 compute-0 sudo[30744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-0 python3[30746]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:13:28 compute-0 sudo[30744]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:28 compute-0 sudo[30817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urwaycnalolinhrczunwrnloqlscdvte ; /usr/bin/python3'
Nov 25 09:13:28 compute-0 sudo[30817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:13:28 compute-0 python3[30819]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9338098-34381-78849199251242/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:13:28 compute-0 sudo[30817]: pam_unix(sudo:session): session closed for user root
Nov 25 09:13:30 compute-0 sshd-session[30844]: Connection closed by 192.168.122.11 port 36452 [preauth]
Nov 25 09:13:30 compute-0 sshd-session[30845]: Connection closed by 192.168.122.11 port 36466 [preauth]
Nov 25 09:13:30 compute-0 sshd-session[30846]: Unable to negotiate with 192.168.122.11 port 36480: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 25 09:13:30 compute-0 sshd-session[30847]: Unable to negotiate with 192.168.122.11 port 36488: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 25 09:13:30 compute-0 sshd-session[30848]: Unable to negotiate with 192.168.122.11 port 36490: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 25 09:13:37 compute-0 python3[30877]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:16:04 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 09:16:04 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 09:16:04 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 09:16:04 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 09:18:37 compute-0 sshd-session[29967]: Received disconnect from 192.168.26.191 port 48568:11: disconnected by user
Nov 25 09:18:37 compute-0 sshd-session[29967]: Disconnected from user zuul 192.168.26.191 port 48568
Nov 25 09:18:37 compute-0 sshd-session[29964]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:18:37 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 09:18:37 compute-0 systemd[1]: session-7.scope: Consumed 3.263s CPU time.
Nov 25 09:18:37 compute-0 systemd-logind[744]: Session 7 logged out. Waiting for processes to exit.
Nov 25 09:18:37 compute-0 systemd-logind[744]: Removed session 7.
Nov 25 09:23:02 compute-0 sshd-session[30881]: Accepted publickey for zuul from 192.168.122.30 port 51624 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:23:02 compute-0 systemd-logind[744]: New session 8 of user zuul.
Nov 25 09:23:02 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 25 09:23:02 compute-0 sshd-session[30881]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:23:03 compute-0 python3.9[31034]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:04 compute-0 sudo[31213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pluvffebrhizkuznalbplwprwvyttplj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062583.8408072-56-57103219134485/AnsiballZ_command.py'
Nov 25 09:23:04 compute-0 sudo[31213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:04 compute-0 python3.9[31215]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
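
One detail in the block above worth noting: the GitHub tarball carries no git metadata, so pbr cannot derive a version at build time, and exporting PBR_VERSION=0.0.0 bypasses that lookup. A trimmed sketch of the same bootstrap, assuming the tarball is already extracted to /var/tmp/repo-setup-main as in the task:

    # pbr normally derives the package version from git tags; a tarball
    # checkout has none, so pin a dummy version for the local install.
    export PBR_VERSION=0.0.0
    python3 -m venv /var/tmp/rs-venv
    /var/tmp/rs-venv/bin/pip install /var/tmp/repo-setup-main
    /var/tmp/rs-venv/bin/repo-setup current-podified -b antelope
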
Nov 25 09:23:12 compute-0 sudo[31213]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:12 compute-0 sshd-session[30884]: Connection closed by 192.168.122.30 port 51624
Nov 25 09:23:12 compute-0 sshd-session[30881]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:23:12 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 09:23:12 compute-0 systemd[1]: session-8.scope: Consumed 5.975s CPU time.
Nov 25 09:23:12 compute-0 systemd-logind[744]: Session 8 logged out. Waiting for processes to exit.
Nov 25 09:23:12 compute-0 systemd-logind[744]: Removed session 8.
Nov 25 09:23:28 compute-0 sshd-session[31273]: Accepted publickey for zuul from 192.168.122.30 port 41714 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:23:28 compute-0 systemd-logind[744]: New session 9 of user zuul.
Nov 25 09:23:28 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 25 09:23:28 compute-0 sshd-session[31273]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:23:28 compute-0 python3.9[31426]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 09:23:29 compute-0 python3.9[31600]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:30 compute-0 sudo[31750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtbswuregfsyswtedffecpxmablwywlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062610.048202-93-116503096745250/AnsiballZ_command.py'
Nov 25 09:23:30 compute-0 sudo[31750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:30 compute-0 python3.9[31752]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:23:30 compute-0 sudo[31750]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:31 compute-0 sudo[31903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgqnqvkinzfqaefrmezqiaqqevtcxzhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062610.8958392-129-103913755249267/AnsiballZ_stat.py'
Nov 25 09:23:31 compute-0 sudo[31903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:31 compute-0 python3.9[31905]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:23:31 compute-0 sudo[31903]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:31 compute-0 sudo[32055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycnkpmtcvidvjhfufjsqyyybnftdtxjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062611.535775-153-18362794641739/AnsiballZ_file.py'
Nov 25 09:23:31 compute-0 sudo[32055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:32 compute-0 python3.9[32057]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:23:32 compute-0 sudo[32055]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:32 compute-0 sudo[32207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyrkueyxmgfijwlytyblcpfhexhtpjbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062612.1570413-177-277410829638712/AnsiballZ_stat.py'
Nov 25 09:23:32 compute-0 sudo[32207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:32 compute-0 python3.9[32209]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:23:32 compute-0 sudo[32207]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:32 compute-0 sudo[32330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzerczbylbsxtutcjjlgiakmwrqaixgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062612.1570413-177-277410829638712/AnsiballZ_copy.py'
Nov 25 09:23:32 compute-0 sudo[32330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:33 compute-0 python3.9[32332]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062612.1570413-177-277410829638712/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:23:33 compute-0 sudo[32330]: pam_unix(sudo:session): session closed for user root
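
bootc.fact lands in /etc/ansible/facts.d with mode 755, so the setup module will execute it and expose its output as ansible_local.bootc (the re-gather with the 'local' subset right after is how the play picks it up). Its real content is not logged; a minimal executable fact of this general shape, offered only as a hypothetical, would be:

    #!/bin/sh
    # Hypothetical /etc/ansible/facts.d/bootc.fact: report whether this
    # host is a bootc image-based system. Executable facts must print JSON.
    if command -v bootc >/dev/null 2>&1 && bootc status >/dev/null 2>&1; then
        echo '{"booted": true}'
    else
        echo '{"booted": false}'
    fi
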
Nov 25 09:23:33 compute-0 sudo[32482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtbnkotcqxlkzknlcievwjsfmwzbkcgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062613.154892-222-25598059608315/AnsiballZ_setup.py'
Nov 25 09:23:33 compute-0 sudo[32482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:33 compute-0 python3.9[32484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:33 compute-0 sudo[32482]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:34 compute-0 sudo[32638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjontuedtxptecnqpoicvvsndkcipdyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062613.9104514-246-111097670203314/AnsiballZ_file.py'
Nov 25 09:23:34 compute-0 sudo[32638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:34 compute-0 python3.9[32640]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:23:34 compute-0 sudo[32638]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:34 compute-0 sudo[32790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdsbwqmxypqrlzhldwyyciewexlxmajz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062614.451275-273-254575485946033/AnsiballZ_file.py'
Nov 25 09:23:34 compute-0 sudo[32790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:34 compute-0 python3.9[32792]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:23:34 compute-0 sudo[32790]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:35 compute-0 python3.9[32942]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:23:37 compute-0 python3.9[33195]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
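
lineinfile against /proc/cmdline cannot actually edit that file; with state=present and create=False it effectively asserts that cloud-init=disabled is already on the booted kernel command line (the task only attempts a write, and fails, if the token is missing). The plain-shell equivalent of that assertion:

    # Exit 0 only if the token is present on the booted cmdline.
    grep -qw 'cloud-init=disabled' /proc/cmdline \
        && echo 'cloud-init disabled at boot' \
        || echo 'cloud-init NOT disabled' >&2
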
Nov 25 09:23:38 compute-0 python3.9[33345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:39 compute-0 python3.9[33499]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:23:39 compute-0 sudo[33655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynzpuspyabsgqggkuumanutvpwmrzofr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062619.594236-417-20151158593629/AnsiballZ_setup.py'
Nov 25 09:23:39 compute-0 sudo[33655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:40 compute-0 python3.9[33657]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:23:40 compute-0 sudo[33655]: pam_unix(sudo:session): session closed for user root
Nov 25 09:23:40 compute-0 sudo[33739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjcpiaxefcbnumyyywohbgamwcwtnmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062619.594236-417-20151158593629/AnsiballZ_dnf.py'
Nov 25 09:23:40 compute-0 sudo[33739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:23:40 compute-0 python3.9[33741]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:25:02 compute-0 systemd[1]: Reloading.
Nov 25 09:25:03 compute-0 systemd-rc-local-generator[33940]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:25:03 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 09:25:03 compute-0 systemd[1]: Reloading.
Nov 25 09:25:03 compute-0 systemd-rc-local-generator[33980]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:25:03 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 09:25:03 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 09:25:03 compute-0 systemd[1]: Reloading.
Nov 25 09:25:03 compute-0 systemd-rc-local-generator[34021]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:25:03 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 09:25:03 compute-0 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Nov 25 09:25:03 compute-0 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Nov 25 09:25:03 compute-0 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Nov 25 09:25:46 compute-0 kernel: SELinux:  Converting 2716 SID table entries...
Nov 25 09:25:46 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:25:46 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:25:46 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:25:46 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:25:46 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:25:46 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:25:46 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:25:46 compute-0 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 09:25:46 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:25:46 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:25:46 compute-0 systemd[1]: Reloading.
Nov 25 09:25:46 compute-0 systemd-rc-local-generator[34322]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:25:46 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:25:46 compute-0 sudo[33739]: pam_unix(sudo:session): session closed for user root
Nov 25 09:25:47 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:25:47 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:25:47 compute-0 systemd[1]: run-r7580d07596a04782a1c58083aa9fcc8a.service: Deactivated successfully.
Nov 25 09:26:01 compute-0 sudo[35235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akxopntfuasjxtzyifauzfpybncutjlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062761.6396542-453-114858185269143/AnsiballZ_command.py'
Nov 25 09:26:01 compute-0 sudo[35235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:01 compute-0 python3.9[35237]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:02 compute-0 sudo[35235]: pam_unix(sudo:session): session closed for user root
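
rpm -V verifies installed files against the rpm database; it prints nothing and exits 0 when everything matches, so an empty result from the task above is the success case:

    # Non-zero exit if any listed package fails verification.
    rpm -V driverctl lvm2 crudini jq nftables && echo 'packages verify clean'
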
Nov 25 09:26:03 compute-0 sudo[35516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oshqwrisovouxzouqwzfybtqiairjckv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062762.806778-477-88082205542057/AnsiballZ_selinux.py'
Nov 25 09:26:03 compute-0 sudo[35516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:03 compute-0 python3.9[35518]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 09:26:03 compute-0 sudo[35516]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:04 compute-0 sudo[35668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afxzpgyivuavrumxgwlzvavqewfylvll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062763.8992648-510-88246136764375/AnsiballZ_command.py'
Nov 25 09:26:04 compute-0 sudo[35668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:04 compute-0 python3.9[35670]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 09:26:04 compute-0 sudo[35668]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:05 compute-0 sudo[35821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqddrwgjhexolwudfzexdkucpmdigzne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062764.9941874-534-37135487911605/AnsiballZ_file.py'
Nov 25 09:26:05 compute-0 sudo[35821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:05 compute-0 python3.9[35823]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:26:05 compute-0 sudo[35821]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:06 compute-0 sudo[35973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnscugcwtskxpsuosixhhpbeiyrjnthu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062766.1213849-558-133602774409386/AnsiballZ_mount.py'
Nov 25 09:26:06 compute-0 sudo[35973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:06 compute-0 python3.9[35975]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 09:26:06 compute-0 sudo[35973]: pam_unix(sudo:session): session closed for user root
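
Taken together, the last three tasks create a 1 GiB swap file (dd with creates=/swap keeps it idempotent), tighten it to 0600, and persist it in fstab via ansible.posix.mount. mkswap/swapon are not visible in this slice of the log, but the full by-hand sequence would typically be:

    # 1 GiB zero-filled swap file; skip if it already exists (idempotence).
    [ -f /swap ] || sudo dd if=/dev/zero of=/swap bs=1M count=1024
    sudo chmod 0600 /swap
    sudo mkswap /swap            # not shown in the log; assumed to follow
    sudo swapon /swap
    # fstab line written by the mount task:
    #   /swap none swap sw 0 0
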
Nov 25 09:26:07 compute-0 sudo[36125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xazcgosmafawdpulzgcvdpjvwjdzpyhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062767.5331712-642-10945757111367/AnsiballZ_file.py'
Nov 25 09:26:07 compute-0 sudo[36125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:07 compute-0 python3.9[36127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:26:07 compute-0 sudo[36125]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:08 compute-0 sudo[36277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzqywygaawiqsmcnidauedpthyaulvnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062768.0557516-666-105571163220979/AnsiballZ_stat.py'
Nov 25 09:26:08 compute-0 sudo[36277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:11 compute-0 python3.9[36279]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:26:11 compute-0 sudo[36277]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:11 compute-0 sudo[36400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcaaegkhbdjtrdvisuluktxzzxppbzox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062768.0557516-666-105571163220979/AnsiballZ_copy.py'
Nov 25 09:26:11 compute-0 sudo[36400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:11 compute-0 python3.9[36402]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062768.0557516-666-105571163220979/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:26:11 compute-0 sudo[36400]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:12 compute-0 sudo[36552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgwyxfblisbecnywvlxlmbwjtpequprh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062772.114846-738-148263085591296/AnsiballZ_stat.py'
Nov 25 09:26:12 compute-0 sudo[36552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:12 compute-0 python3.9[36554]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:26:12 compute-0 sudo[36552]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:13 compute-0 sudo[36704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftmjoqswmefrrfhuhuhhjxnhofssdcne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062772.9850447-762-212395707391491/AnsiballZ_command.py'
Nov 25 09:26:13 compute-0 sudo[36704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:13 compute-0 python3.9[36706]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:13 compute-0 sudo[36704]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:13 compute-0 sudo[36857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwoqbovqltdelsfgyarjtugfykprhmvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062773.5135937-786-152575993726377/AnsiballZ_file.py'
Nov 25 09:26:13 compute-0 sudo[36857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:13 compute-0 python3.9[36859]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:26:13 compute-0 sudo[36857]: pam_unix(sudo:session): session closed for user root
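
vgimportdevices --all populates /etc/lvm/devices/system.devices from the PVs currently visible, and the follow-up touch guarantees the file exists even when no VGs were found, which keeps LVM in devices-file mode. To inspect the result:

    # List the devices now pinned in the LVM devices file.
    sudo lvmdevices
    sudo cat /etc/lvm/devices/system.devices
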
Nov 25 09:26:14 compute-0 sudo[37009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cobioshlrzyknrvilncvqtybvvwivyzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062774.2832193-819-57311399858310/AnsiballZ_getent.py'
Nov 25 09:26:14 compute-0 sudo[37009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:14 compute-0 python3.9[37011]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 09:26:14 compute-0 sudo[37009]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:15 compute-0 sudo[37162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-antgoubjfqnagbilfmeelnkhsbyzcbmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062774.8743-843-254650408943974/AnsiballZ_group.py'
Nov 25 09:26:15 compute-0 sudo[37162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:15 compute-0 python3.9[37164]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:26:15 compute-0 groupadd[37165]: group added to /etc/group: name=qemu, GID=107
Nov 25 09:26:15 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:26:15 compute-0 groupadd[37165]: group added to /etc/gshadow: name=qemu
Nov 25 09:26:15 compute-0 groupadd[37165]: new group: name=qemu, GID=107
Nov 25 09:26:15 compute-0 sudo[37162]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:15 compute-0 sudo[37321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hakctvhcohhasrecnaolveotljiomyfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062775.58961-867-44751230883543/AnsiballZ_user.py'
Nov 25 09:26:15 compute-0 sudo[37321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:16 compute-0 python3.9[37323]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 09:26:16 compute-0 useradd[37325]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 09:26:16 compute-0 sudo[37321]: pam_unix(sudo:session): session closed for user root
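
The getent/group/user trio above pins qemu to a fixed UID/GID of 107, presumably to match ownership inside the deployment's container images; the guarded shell equivalent:

    # Create group and user only if absent, with the fixed IDs from the play.
    getent group qemu  >/dev/null || sudo groupadd -g 107 qemu
    getent passwd qemu >/dev/null || \
        sudo useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu
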
Nov 25 09:26:16 compute-0 sudo[37481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojmzagvvefkdrdrefvvtggqzcglrrkvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062776.3312798-891-188510484372829/AnsiballZ_getent.py'
Nov 25 09:26:16 compute-0 sudo[37481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:16 compute-0 python3.9[37483]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 09:26:16 compute-0 sudo[37481]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:16 compute-0 sudo[37634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlnqnzerpcqzcgzvsonxzxaivzfeyhqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062776.8242235-915-9604543772215/AnsiballZ_group.py'
Nov 25 09:26:16 compute-0 sudo[37634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:17 compute-0 python3.9[37636]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:26:17 compute-0 groupadd[37637]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 25 09:26:17 compute-0 groupadd[37637]: group added to /etc/gshadow: name=hugetlbfs
Nov 25 09:26:17 compute-0 groupadd[37637]: new group: name=hugetlbfs, GID=42477
Nov 25 09:26:17 compute-0 sudo[37634]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:17 compute-0 sudo[37792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwbjbdmdmwlderqfctpbnhnjzeceiarr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062777.4085436-942-125122919938303/AnsiballZ_file.py'
Nov 25 09:26:17 compute-0 sudo[37792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:17 compute-0 python3.9[37794]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 09:26:17 compute-0 sudo[37792]: pam_unix(sudo:session): session closed for user root
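
[annotation] The ansible.builtin.file call above creates the directory that vhost-user workloads (QEMU and OVS-DPDK) share their sockets through. All arguments are copied from the log line; only the task name is illustrative:

    - name: Create the shared vhost-user socket directory
      ansible.builtin.file:
        path: /var/lib/vhost_sockets
        state: directory
        owner: qemu
        group: qemu
        mode: '0755'
        seuser: system_u
        setype: virt_cache_t
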
Nov 25 09:26:18 compute-0 sudo[37944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhohooaymmgbwfxzwhqocmzwgfpenfrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062778.1683552-975-174173040235089/AnsiballZ_dnf.py'
Nov 25 09:26:18 compute-0 sudo[37944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:18 compute-0 python3.9[37946]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:26:19 compute-0 sudo[37944]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:20 compute-0 sudo[38097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgeqhhwchcqucplbqoyiwcdppeqmjvbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062779.9641337-999-2122581336282/AnsiballZ_file.py'
Nov 25 09:26:20 compute-0 sudo[38097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:20 compute-0 python3.9[38099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:26:20 compute-0 sudo[38097]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:20 compute-0 sudo[38249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvsupksnzqbneabusyegqoetkupnhjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062780.4646087-1023-279210629892962/AnsiballZ_stat.py'
Nov 25 09:26:20 compute-0 sudo[38249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:20 compute-0 python3.9[38251]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:26:20 compute-0 sudo[38249]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:21 compute-0 sudo[38372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgkurcmaicvzxnlgtlgtjfvflmakkovu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062780.4646087-1023-279210629892962/AnsiballZ_copy.py'
Nov 25 09:26:21 compute-0 sudo[38372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:21 compute-0 python3.9[38374]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062780.4646087-1023-279210629892962/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:26:21 compute-0 sudo[38372]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:21 compute-0 sudo[38524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oglfjtijygvzkielorcmzpwryiyeobbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062781.3591287-1068-264723236852562/AnsiballZ_systemd.py'
Nov 25 09:26:21 compute-0 sudo[38524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:22 compute-0 python3.9[38526]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:26:22 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 09:26:22 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 09:26:22 compute-0 kernel: Bridge firewalling registered
Nov 25 09:26:22 compute-0 systemd-modules-load[38530]: Inserted module 'br_netfilter'
Nov 25 09:26:22 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 09:26:22 compute-0 sudo[38524]: pam_unix(sudo:session): session closed for user root
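
[annotation] This sequence drops a module list into /etc/modules-load.d and restarts systemd-modules-load so the modules take effect without a reboot; the kernel lines confirm br_netfilter was among them. A sketch under those assumptions — the rendered template content is not logged, so the file body below is inferred from the "Inserted module 'br_netfilter'" record:

    - name: Install the EDPM kernel-module list
      ansible.builtin.copy:
        dest: /etc/modules-load.d/99-edpm.conf
        owner: root
        group: root
        mode: '0644'
        setype: etc_t
        content: |
          # real file is rendered from edpm-modprobe.conf.j2; content inferred
          br_netfilter

    - name: Load the listed modules immediately
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted
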
Nov 25 09:26:22 compute-0 sudo[38683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crgylaadvfcsmxsqbskcxiechfzlrmts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062782.242545-1092-264093825181004/AnsiballZ_stat.py'
Nov 25 09:26:22 compute-0 sudo[38683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:22 compute-0 python3.9[38685]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:26:22 compute-0 sudo[38683]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:22 compute-0 sudo[38806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmhkxdychtsltjlykfgtquxujomnyyzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062782.242545-1092-264093825181004/AnsiballZ_copy.py'
Nov 25 09:26:22 compute-0 sudo[38806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:22 compute-0 python3.9[38808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062782.242545-1092-264093825181004/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:26:22 compute-0 sudo[38806]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:23 compute-0 sudo[38958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwapwqilhqvrifenmxuyswhagavvmzho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062783.3614295-1146-10042051400954/AnsiballZ_dnf.py'
Nov 25 09:26:23 compute-0 sudo[38958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:23 compute-0 python3.9[38960]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:26:28 compute-0 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Nov 25 09:26:28 compute-0 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Nov 25 09:26:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:26:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:26:28 compute-0 systemd[1]: Reloading.
Nov 25 09:26:28 compute-0 systemd-rc-local-generator[39017]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:26:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:26:29 compute-0 sudo[38958]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:26:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:26:31 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.905s CPU time.
Nov 25 09:26:31 compute-0 systemd[1]: run-r2306be3a27f34ae99fc5e62c7bffcdfb.service: Deactivated successfully.
Nov 25 09:26:31 compute-0 python3.9[42674]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:26:31 compute-0 python3.9[42826]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 09:26:32 compute-0 python3.9[42976]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:26:32 compute-0 sudo[43126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anwynebtcyclceixrklklnzwmwjqiepy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062792.7808433-1263-102861870824487/AnsiballZ_command.py'
Nov 25 09:26:32 compute-0 sudo[43126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:33 compute-0 python3.9[43128]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:33 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 09:26:33 compute-0 systemd[1]: Starting Authorization Manager...
Nov 25 09:26:33 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 09:26:33 compute-0 polkitd[43345]: Started polkitd version 0.117
Nov 25 09:26:33 compute-0 polkitd[43345]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 09:26:33 compute-0 polkitd[43345]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 09:26:33 compute-0 polkitd[43345]: Finished loading, compiling and executing 2 rules
Nov 25 09:26:33 compute-0 systemd[1]: Started Authorization Manager.
Nov 25 09:26:33 compute-0 polkitd[43345]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 25 09:26:33 compute-0 sudo[43126]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:33 compute-0 sudo[43509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-senysqhtvsvlfxwhsjalvkxbbiglmszd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062793.7935293-1290-73613344678437/AnsiballZ_systemd.py'
Nov 25 09:26:33 compute-0 sudo[43509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:34 compute-0 python3.9[43511]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:26:34 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 09:26:34 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 09:26:34 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 09:26:34 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 09:26:34 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 09:26:34 compute-0 sudo[43509]: pam_unix(sudo:session): session closed for user root
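
[annotation] The block above installs tuned plus the cpu-partitioning profile package, switches the node to throughput-performance via tuned-adm, then enables and restarts the daemon so the profile survives reboots. A reconstruction of the three steps (the journal shows the ansible.legacy module aliases; the builtin names below are equivalent, and task names are illustrative):

    - name: Install tuned and the cpu-partitioning profile
      ansible.builtin.dnf:
        name:
          - tuned
          - tuned-profiles-cpu-partitioning
        state: present

    - name: Select the throughput-performance profile
      ansible.builtin.command: /usr/sbin/tuned-adm profile throughput-performance

    - name: Persist the choice across reboots
      ansible.builtin.systemd:
        name: tuned
        enabled: true
        state: restarted
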
Nov 25 09:26:35 compute-0 python3.9[43673]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 09:26:38 compute-0 sudo[43823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzzzyqwsinmnbfkznixmkjzphqwidnly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062797.9043598-1461-92114523272011/AnsiballZ_systemd.py'
Nov 25 09:26:38 compute-0 sudo[43823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:38 compute-0 python3.9[43825]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:26:38 compute-0 systemd[1]: Reloading.
Nov 25 09:26:38 compute-0 systemd-rc-local-generator[43848]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:26:38 compute-0 sudo[43823]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:38 compute-0 sudo[44011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgqjedxjhorgjeivetchuezdpyeslyjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062798.63386-1461-70130182774893/AnsiballZ_systemd.py'
Nov 25 09:26:38 compute-0 sudo[44011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:39 compute-0 python3.9[44013]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:26:39 compute-0 systemd[1]: Reloading.
Nov 25 09:26:39 compute-0 systemd-rc-local-generator[44035]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:26:39 compute-0 systemd[1]: Starting dnf makecache...
Nov 25 09:26:39 compute-0 sudo[44011]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:39 compute-0 dnf[44051]: Failed determining last makecache time.
Nov 25 09:26:39 compute-0 dnf[44051]: delorean-openstack-barbican-42b4c41831408a8e323  21 kB/s | 3.0 kB     00:00
Nov 25 09:26:39 compute-0 dnf[44051]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  21 kB/s | 3.0 kB     00:00
Nov 25 09:26:39 compute-0 sudo[44204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tewfxbyppsireschufmfvhbuxihudqwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062799.592116-1509-238959644304469/AnsiballZ_command.py'
Nov 25 09:26:39 compute-0 sudo[44204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:39 compute-0 dnf[44051]: delorean-openstack-cinder-1c00d6490d88e436f26ef  21 kB/s | 3.0 kB     00:00
Nov 25 09:26:39 compute-0 python3.9[44206]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:39 compute-0 sudo[44204]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:39 compute-0 dnf[44051]: delorean-python-stevedore-c4acc5639fd2329372142  20 kB/s | 3.0 kB     00:00
Nov 25 09:26:40 compute-0 dnf[44051]: delorean-python-observabilityclient-2f31846d73c  22 kB/s | 3.0 kB     00:00
Nov 25 09:26:40 compute-0 sudo[44360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxqvwaisglqlbaliesqvjcrdttvnrdjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062800.1026955-1533-263863021098639/AnsiballZ_command.py'
Nov 25 09:26:40 compute-0 dnf[44051]: delorean-os-net-config-bbae2ed8a159b0435a473f38  22 kB/s | 3.0 kB     00:00
Nov 25 09:26:40 compute-0 sudo[44360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:40 compute-0 dnf[44051]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  22 kB/s | 3.0 kB     00:00
Nov 25 09:26:40 compute-0 python3.9[44362]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:40 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 25 09:26:40 compute-0 sudo[44360]: pam_unix(sudo:session): session closed for user root
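
[annotation] Swap is brought up in two command-module calls: mkswap writes a signature to the pre-allocated /swap file, then swapon activates it. The kernel reports 1048572k, i.e. the 1 GiB file minus the 4 KiB page that holds the swap signature. A sketch of the pair (a persistent setup would also add an /etc/fstab entry, which does not appear in this excerpt):

    - name: Write a swap signature to the pre-allocated file
      ansible.builtin.command: mkswap /swap

    - name: Enable the swap area immediately
      ansible.builtin.command: swapon /swap
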
Nov 25 09:26:40 compute-0 dnf[44051]: delorean-python-designate-tests-tempest-347fdbc  23 kB/s | 3.0 kB     00:00
Nov 25 09:26:40 compute-0 dnf[44051]: delorean-openstack-glance-1fd12c29b339f30fe823e  22 kB/s | 3.0 kB     00:00
Nov 25 09:26:40 compute-0 sudo[44517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxewsdztyzhmvwyxkodjqhgydaijfptr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062800.6083236-1557-91334790813354/AnsiballZ_command.py'
Nov 25 09:26:40 compute-0 sudo[44517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:40 compute-0 dnf[44051]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  21 kB/s | 3.0 kB     00:00
Nov 25 09:26:40 compute-0 python3.9[44519]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:40 compute-0 dnf[44051]: delorean-openstack-manila-3c01b7181572c95dac462  21 kB/s | 3.0 kB     00:00
Nov 25 09:26:41 compute-0 dnf[44051]: delorean-python-whitebox-neutron-tests-tempest-  21 kB/s | 3.0 kB     00:00
Nov 25 09:26:41 compute-0 dnf[44051]: delorean-openstack-octavia-ba397f07a7331190208c  23 kB/s | 3.0 kB     00:00
Nov 25 09:26:41 compute-0 dnf[44051]: delorean-openstack-watcher-c014f81a8647287f6dcc  22 kB/s | 3.0 kB     00:00
Nov 25 09:26:41 compute-0 dnf[44051]: delorean-python-tcib-1124124ec06aadbac34f0d340b  22 kB/s | 3.0 kB     00:00
Nov 25 09:26:41 compute-0 dnf[44051]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158  21 kB/s | 3.0 kB     00:00
Nov 25 09:26:41 compute-0 dnf[44051]: delorean-openstack-swift-dc98a8463506ac520c469a  22 kB/s | 3.0 kB     00:00
Nov 25 09:26:41 compute-0 dnf[44051]: delorean-python-tempestconf-8515371b7cceebd4282  20 kB/s | 3.0 kB     00:00
Nov 25 09:26:41 compute-0 sudo[44517]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:42 compute-0 dnf[44051]: delorean-openstack-heat-ui-013accbfd179753bc3f0  23 kB/s | 3.0 kB     00:00
Nov 25 09:26:42 compute-0 sudo[44689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikgwqaxrjhkqjpsveclygfuzdhttgjzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062802.1491883-1581-171415253321315/AnsiballZ_command.py'
Nov 25 09:26:42 compute-0 sudo[44689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:42 compute-0 python3.9[44691]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:26:42 compute-0 sudo[44689]: pam_unix(sudo:session): session closed for user root
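
[annotation] Kernel samepage merging is switched off in two stages: ksm.service and ksmtuned.service are stopped and disabled (09:26:38-39), then 2 is written to /sys/kernel/mm/ksm/run, which per the kernel documentation stops ksmd and un-merges all currently shared pages. Note the journal shows the command module with _uses_shell=False, and only a shell interprets the > redirection, so a faithful re-implementation would use the shell module, as in this sketch (the loop is my condensation of the two separate systemd calls):

    - name: Stop and disable the KSM daemons
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - ksm.service
        - ksmtuned.service

    - name: Unmerge already-shared pages (2 = stop ksmd and unshare everything)
      ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run
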
Nov 25 09:26:42 compute-0 sudo[44842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgzolteybgwfrvffedbunngejoomdqvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062802.6521199-1605-100774568979016/AnsiballZ_systemd.py'
Nov 25 09:26:42 compute-0 sudo[44842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:43 compute-0 python3.9[44844]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:26:43 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 09:26:43 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 09:26:43 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 09:26:43 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 25 09:26:43 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 09:26:43 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 25 09:26:43 compute-0 sudo[44842]: pam_unix(sudo:session): session closed for user root
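
[annotation] The systemd-sysctl.service restart above applies the /etc/sysctl.d/99-edpm.conf drop-in that was copied into place at 09:26:22, again avoiding a reboot. A sketch of the pair under the same caveat as before (the rendered template content is not logged):

    - name: Install the EDPM sysctl drop-in
      ansible.builtin.copy:
        src: edpm-sysctl.conf          # rendered from edpm-sysctl.conf.j2; content not logged
        dest: /etc/sysctl.d/99-edpm.conf
        owner: root
        group: root
        mode: '0644'
        setype: etc_t

    - name: Re-read all sysctl drop-ins
      ansible.builtin.systemd:
        name: systemd-sysctl.service
        state: restarted
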
Nov 25 09:26:43 compute-0 dnf[44051]: CentOS Stream 9 - BaseOS                        4.0 kB/s | 5.4 kB     00:01
Nov 25 09:26:43 compute-0 sshd-session[31276]: Connection closed by 192.168.122.30 port 41714
Nov 25 09:26:43 compute-0 sshd-session[31273]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:26:43 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 09:26:43 compute-0 systemd[1]: session-9.scope: Consumed 1min 34.363s CPU time.
Nov 25 09:26:43 compute-0 systemd-logind[744]: Session 9 logged out. Waiting for processes to exit.
Nov 25 09:26:43 compute-0 systemd-logind[744]: Removed session 9.
Nov 25 09:26:43 compute-0 dnf[44051]: CentOS Stream 9 - AppStream                      18 kB/s | 6.1 kB     00:00
Nov 25 09:26:44 compute-0 dnf[44051]: CentOS Stream 9 - CRB                            15 kB/s | 5.3 kB     00:00
Nov 25 09:26:45 compute-0 dnf[44051]: CentOS Stream 9 - Extras packages               8.7 kB/s | 8.3 kB     00:00
Nov 25 09:26:45 compute-0 dnf[44051]: dlrn-antelope-testing                            22 kB/s | 3.0 kB     00:00
Nov 25 09:26:45 compute-0 dnf[44051]: dlrn-antelope-build-deps                         22 kB/s | 3.0 kB     00:00
Nov 25 09:26:46 compute-0 dnf[44051]: centos9-rabbitmq                                2.1 kB/s | 3.0 kB     00:01
Nov 25 09:26:47 compute-0 dnf[44051]: centos9-storage                                 5.1 kB/s | 3.0 kB     00:00
Nov 25 09:26:47 compute-0 dnf[44051]: centos9-opstools                                6.9 kB/s | 3.0 kB     00:00
Nov 25 09:26:48 compute-0 dnf[44051]: NFV SIG OpenvSwitch                             6.3 kB/s | 3.0 kB     00:00
Nov 25 09:26:49 compute-0 sshd-session[44888]: Accepted publickey for zuul from 192.168.122.30 port 37196 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:26:49 compute-0 systemd-logind[744]: New session 10 of user zuul.
Nov 25 09:26:49 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 25 09:26:49 compute-0 sshd-session[44888]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:26:50 compute-0 python3.9[45041]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:26:50 compute-0 dnf[44051]: repo-setup-centos-appstream                     2.3 kB/s | 4.4 kB     00:01
Nov 25 09:26:51 compute-0 sudo[45198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkrrpwqmadasbmpnwtyhaupjtyolhwum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062810.7027736-68-177860207064481/AnsiballZ_getent.py'
Nov 25 09:26:51 compute-0 sudo[45198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:51 compute-0 python3.9[45200]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 09:26:51 compute-0 sudo[45198]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:51 compute-0 sudo[45351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rflrrwfwigairharhljqgqwifwuetxie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062811.3717773-92-68307170092280/AnsiballZ_group.py'
Nov 25 09:26:51 compute-0 sudo[45351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:51 compute-0 dnf[44051]: repo-setup-centos-baseos                        3.2 kB/s | 3.9 kB     00:01
Nov 25 09:26:51 compute-0 python3.9[45353]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:26:51 compute-0 groupadd[45355]: group added to /etc/group: name=openvswitch, GID=42476
Nov 25 09:26:51 compute-0 groupadd[45355]: group added to /etc/gshadow: name=openvswitch
Nov 25 09:26:51 compute-0 groupadd[45355]: new group: name=openvswitch, GID=42476
Nov 25 09:26:51 compute-0 sudo[45351]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:52 compute-0 sudo[45510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oljkfvwbmcnkxkmzvujlwtsnnavhfjeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062812.039007-116-230369417827236/AnsiballZ_user.py'
Nov 25 09:26:52 compute-0 sudo[45510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:52 compute-0 python3.9[45512]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 09:26:52 compute-0 useradd[45514]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 09:26:52 compute-0 useradd[45514]: add 'openvswitch' to group 'hugetlbfs'
Nov 25 09:26:52 compute-0 useradd[45514]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 25 09:26:52 compute-0 sudo[45510]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:53 compute-0 sudo[45671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtsbsxcjhzhvwyktnwjicdycvlvsxxbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062812.954481-146-84576468580673/AnsiballZ_setup.py'
Nov 25 09:26:53 compute-0 sudo[45671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:53 compute-0 dnf[44051]: repo-setup-centos-highavailability              2.7 kB/s | 3.9 kB     00:01
Nov 25 09:26:53 compute-0 python3.9[45673]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:26:53 compute-0 sudo[45671]: pam_unix(sudo:session): session closed for user root
Nov 25 09:26:53 compute-0 dnf[44051]: repo-setup-centos-powertools                     10 kB/s | 4.3 kB     00:00
Nov 25 09:26:53 compute-0 sudo[45758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckhcvqaofdluqatpvqzfhrqlahpdnhnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062812.954481-146-84576468580673/AnsiballZ_dnf.py'
Nov 25 09:26:53 compute-0 sudo[45758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:26:54 compute-0 python3.9[45760]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 09:26:54 compute-0 dnf[44051]: Extra Packages for Enterprise Linux 9 - x86_64   69 kB/s |  31 kB     00:00
Nov 25 09:26:54 compute-0 dnf[44051]: Metadata cache created.
Nov 25 09:26:54 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 09:26:54 compute-0 systemd[1]: Finished dnf makecache.
Nov 25 09:26:54 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.322s CPU time.
Nov 25 09:27:32 compute-0 sudo[45758]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:32 compute-0 sudo[45923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alfivivuigbwrojoquaipblbsdmyotoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062852.5339215-188-69341543327426/AnsiballZ_dnf.py'
Nov 25 09:27:32 compute-0 sudo[45923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:32 compute-0 python3.9[45925]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:27:41 compute-0 kernel: SELinux:  Converting 2729 SID table entries...
Nov 25 09:27:41 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:27:41 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:27:41 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:27:41 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:27:41 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:27:41 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:27:41 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:27:41 compute-0 groupadd[45948]: group added to /etc/group: name=unbound, GID=993
Nov 25 09:27:41 compute-0 groupadd[45948]: group added to /etc/gshadow: name=unbound
Nov 25 09:27:41 compute-0 groupadd[45948]: new group: name=unbound, GID=993
Nov 25 09:27:41 compute-0 useradd[45955]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 25 09:27:41 compute-0 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 09:27:41 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 09:27:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:27:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:27:42 compute-0 systemd[1]: Reloading.
Nov 25 09:27:42 compute-0 systemd-rc-local-generator[46447]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:27:42 compute-0 systemd-sysv-generator[46456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:27:42 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:27:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:27:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:27:42 compute-0 systemd[1]: run-r46b288da5db5460eb1a9b1c58e191818.service: Deactivated successfully.
Nov 25 09:27:42 compute-0 sudo[45923]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:43 compute-0 sudo[47021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aucbeoxyuejkgchvkvluwiaisgfwuxal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062862.7416086-212-60215537490129/AnsiballZ_systemd.py'
Nov 25 09:27:43 compute-0 sudo[47021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:43 compute-0 python3.9[47023]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:27:43 compute-0 systemd[1]: Reloading.
Nov 25 09:27:43 compute-0 systemd-rc-local-generator[47047]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:27:43 compute-0 systemd-sysv-generator[47051]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:27:43 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 09:27:43 compute-0 chown[47066]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 09:27:43 compute-0 ovs-ctl[47071]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 09:27:43 compute-0 ovs-ctl[47071]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 09:27:43 compute-0 ovs-ctl[47071]: Starting ovsdb-server [  OK  ]
Nov 25 09:27:43 compute-0 ovs-vsctl[47120]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 09:27:43 compute-0 ovs-vsctl[47140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"a23dd616-1012-4f28-8d7d-927fdaae5f69\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 09:27:43 compute-0 ovs-ctl[47071]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 09:27:43 compute-0 ovs-ctl[47071]: Enabling remote OVSDB managers [  OK  ]
Nov 25 09:27:43 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 09:27:43 compute-0 ovs-vsctl[47146]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 09:27:43 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 09:27:43 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 09:27:43 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 09:27:43 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 09:27:43 compute-0 ovs-ctl[47190]: Inserting openvswitch module [  OK  ]
Nov 25 09:27:43 compute-0 ovs-ctl[47159]: Starting ovs-vswitchd [  OK  ]
Nov 25 09:27:43 compute-0 ovs-ctl[47159]: Enabling remote OVSDB managers [  OK  ]
Nov 25 09:27:43 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 09:27:43 compute-0 systemd[1]: Starting Open vSwitch...
Nov 25 09:27:43 compute-0 systemd[1]: Finished Open vSwitch.
Nov 25 09:27:43 compute-0 sudo[47021]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:43 compute-0 ovs-vsctl[47209]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
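
[annotation] The span from 09:26:54 to 09:27:43 covers the Open vSwitch rollout: dnf pre-downloads and then installs the package, and a single systemd task enables and starts openvswitch.service. On first start ovs-ctl bootstraps an empty /etc/openvswitch/conf.db, launches ovsdb-server and ovs-vswitchd, loads the openvswitch kernel module, and records the system-id and hostname in the database. The equivalent tasks, reconstructed from the logged parameters:

    - name: Install Open vSwitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present

    - name: Enable and start the service (first start bootstraps conf.db)
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        masked: false
        state: started
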
Nov 25 09:27:44 compute-0 python3.9[47359]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:27:45 compute-0 sudo[47509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orxuhlsikzqtwxuocvmmxzakknrwiqbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062864.8485832-266-64069624008723/AnsiballZ_sefcontext.py'
Nov 25 09:27:45 compute-0 sudo[47509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:45 compute-0 python3.9[47511]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 09:27:46 compute-0 kernel: SELinux:  Converting 2743 SID table entries...
Nov 25 09:27:46 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:27:46 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:27:46 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:27:46 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:27:46 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:27:46 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:27:46 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:27:46 compute-0 sudo[47509]: pam_unix(sudo:session): session closed for user root
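
[annotation] community.general.sefcontext adds a persistent SELinux file-context rule for /var/lib/edpm-config and reloads the policy — that reload is what produces the second burst of "SELinux: Converting ... SID table entries" kernel messages above. The directory itself is created with this label a few seconds later (09:27:50). Arguments below are verbatim from the log; only the task name is invented:

    - name: Label /var/lib/edpm-config for container access
      community.general.sefcontext:
        target: '/var/lib/edpm-config(/.*)?'
        setype: container_file_t
        selevel: s0
        state: present
        reload: true
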
Nov 25 09:27:46 compute-0 python3.9[47666]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:27:47 compute-0 sudo[47822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gejswjonwxffqcdpqvkivworieboiuqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062867.2802675-320-85059415816357/AnsiballZ_dnf.py'
Nov 25 09:27:47 compute-0 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 09:27:47 compute-0 sudo[47822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:47 compute-0 python3.9[47824]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:27:48 compute-0 sudo[47822]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:49 compute-0 sudo[47975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llbzfcecmaomdwiusmtxwzkpfcivsbga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062868.7875128-344-264905170551670/AnsiballZ_command.py'
Nov 25 09:27:49 compute-0 sudo[47975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:49 compute-0 python3.9[47977]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:27:49 compute-0 sudo[47975]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:50 compute-0 sudo[48262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsnfwxwhmxdpsgmlkjtyrsghfdsmjozv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062869.866869-368-51428085299740/AnsiballZ_file.py'
Nov 25 09:27:50 compute-0 sudo[48262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:50 compute-0 python3.9[48264]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 09:27:50 compute-0 sudo[48262]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:50 compute-0 python3.9[48414]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:27:51 compute-0 sudo[48566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikosdpoootubehdicnsiuygnpgroibnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062871.0407403-416-113304394819910/AnsiballZ_dnf.py'
Nov 25 09:27:51 compute-0 sudo[48566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:51 compute-0 python3.9[48568]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:27:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:27:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:27:54 compute-0 systemd[1]: Reloading.
Nov 25 09:27:54 compute-0 systemd-rc-local-generator[48600]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:27:54 compute-0 systemd-sysv-generator[48603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:27:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:27:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:27:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:27:54 compute-0 systemd[1]: run-rb8921a64be0c4c8bacd40d55c24ef955.service: Deactivated successfully.
Nov 25 09:27:54 compute-0 sudo[48566]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:54 compute-0 sudo[48883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsyeiurzesxfdcgludjhgdbwhidgphxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062874.7515423-440-277141015088693/AnsiballZ_systemd.py'
Nov 25 09:27:54 compute-0 sudo[48883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:55 compute-0 python3.9[48885]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:27:55 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 09:27:55 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 09:27:55 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 09:27:55 compute-0 NetworkManager[7262]: <info>  [1764062875.2055] caught SIGTERM, shutting down normally.
Nov 25 09:27:55 compute-0 NetworkManager[7262]: <info>  [1764062875.2062] dhcp4 (eth0): canceled DHCP transaction
Nov 25 09:27:55 compute-0 NetworkManager[7262]: <info>  [1764062875.2063] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:27:55 compute-0 NetworkManager[7262]: <info>  [1764062875.2063] dhcp4 (eth0): state changed no lease
Nov 25 09:27:55 compute-0 NetworkManager[7262]: <info>  [1764062875.2064] dhcp6 (eth0): canceled DHCP transaction
Nov 25 09:27:55 compute-0 NetworkManager[7262]: <info>  [1764062875.2064] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:27:55 compute-0 NetworkManager[7262]: <info>  [1764062875.2064] dhcp6 (eth0): state changed no lease
Nov 25 09:27:55 compute-0 NetworkManager[7262]: <info>  [1764062875.2065] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:27:55 compute-0 systemd[1]: Stopping Network Manager...
Nov 25 09:27:55 compute-0 NetworkManager[7262]: <info>  [1764062875.2093] exiting (success)
Nov 25 09:27:55 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:27:55 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:27:55 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 09:27:55 compute-0 systemd[1]: Stopped Network Manager.
Nov 25 09:27:55 compute-0 systemd[1]: Starting Network Manager...
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.2650] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:f8c189f2-455d-46a5-8a09-714641cd81d1)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.2650] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.2691] manager[0x557f9a5b5010]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 09:27:55 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 09:27:55 compute-0 systemd[1]: Started Hostname Service.
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3287] hostname: hostname: using hostnamed
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3287] hostname: static hostname changed from (none) to "compute-0"
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3290] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3293] manager[0x557f9a5b5010]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3293] manager[0x557f9a5b5010]: rfkill: WWAN hardware radio set enabled
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3308] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3314] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3315] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3315] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3316] manager: Networking is enabled by state file
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3325] settings: Loaded settings plugin: keyfile (internal)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3327] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3345] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3355] dhcp: init: Using DHCP client 'internal'
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3357] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3360] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3365] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3370] device (lo): Activation: starting connection 'lo' (28ef2950-3e02-469b-b897-6f6f0a688c29)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3375] device (eth0): carrier: link connected
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3378] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3382] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3382] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3386] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3391] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3395] device (eth1): carrier: link connected
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3398] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3401] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (8d7be350-b956-5589-a7f6-ba574a72fbd9) (indicated)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3402] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3405] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3410] device (eth1): Activation: starting connection 'ci-private-network' (8d7be350-b956-5589-a7f6-ba574a72fbd9)
Nov 25 09:27:55 compute-0 systemd[1]: Started Network Manager.
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3418] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3422] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3424] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3424] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3426] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3427] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3428] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3430] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3432] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3435] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3436] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3438] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3443] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3445] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3451] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3466] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3474] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3496] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3497] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3498] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3503] device (lo): Activation: successful, device activated.
Nov 25 09:27:55 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3506] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3507] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 09:27:55 compute-0 NetworkManager[48903]: <info>  [1764062875.3509] device (eth1): Activation: successful, device activated.
Nov 25 09:27:55 compute-0 sudo[48883]: pam_unix(sudo:session): session closed for user root
Nov 25 09:27:55 compute-0 sudo[49092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmstvlddjycymkowpkhbhkwxdvsabrlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062875.5582368-464-50673335203368/AnsiballZ_dnf.py'
Nov 25 09:27:55 compute-0 sudo[49092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:27:55 compute-0 python3.9[49094]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:27:56 compute-0 NetworkManager[48903]: <info>  [1764062876.3591] dhcp6 (eth0): state changed new lease, address=2001:db8::1c1
Nov 25 09:27:56 compute-0 NetworkManager[48903]: <info>  [1764062876.3599] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:27:56 compute-0 NetworkManager[48903]: <info>  [1764062876.3629] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:27:56 compute-0 NetworkManager[48903]: <info>  [1764062876.3630] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:27:56 compute-0 NetworkManager[48903]: <info>  [1764062876.3632] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:27:56 compute-0 NetworkManager[48903]: <info>  [1764062876.3634] device (eth0): Activation: successful, device activated.
Nov 25 09:27:56 compute-0 NetworkManager[48903]: <info>  [1764062876.3637] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 09:27:56 compute-0 NetworkManager[48903]: <info>  [1764062876.3638] manager: startup complete
Nov 25 09:27:56 compute-0 systemd[1]: Finished Network Manager Wait Online.
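
The run above is NetworkManager finishing startup: lo, eth1, and eth0 walk through ip-config -> ip-check -> secondaries -> activated, eth0 takes a DHCPv4 lease (192.168.26.109) and a DHCPv6 lease (2001:db8::1c1), the manager state climbs to CONNECTED_GLOBAL, and NetworkManager-wait-online.service completes, unblocking network-online.target. A quick way to confirm the same state on a similar host (standard nmcli/systemctl, no assumptions beyond the names in the log):

    # Confirm NM startup is complete and the host is globally connected
    nmcli general status                             # STATE should read "connected"
    systemctl is-active NetworkManager-wait-online.service
    nmcli -g GENERAL.STATE device show eth0          # "100 (connected)" once activated
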
Nov 25 09:28:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:28:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:28:01 compute-0 systemd[1]: Reloading.
Nov 25 09:28:01 compute-0 systemd-sysv-generator[49164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:28:01 compute-0 systemd-rc-local-generator[49160]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:28:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:28:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:28:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:28:01 compute-0 systemd[1]: run-rfa98f7d2765f4612bcc2229b275799e3.service: Deactivated successfully.
Nov 25 09:28:02 compute-0 sudo[49092]: pam_unix(sudo:session): session closed for user root
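
Above, the Ansible dnf module (run through sudo by the zuul CI user) installs os-net-config; the systemd "Reloading." plus man-db-cache-update and sysv-generator chatter is ordinary post-transaction scriptlet noise. The module call is roughly this shell, with the package name taken from the log and state=present implying an idempotent install:

    # Shell equivalent of ansible.legacy.dnf with name=['os-net-config'] state=present
    dnf install -y os-net-config
    rpm -q os-net-config      # verify the package is now installed
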
Nov 25 09:28:04 compute-0 sudo[49571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxoijrgiaftzenkzfefhuletvezukcth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062884.0735085-500-112028366292453/AnsiballZ_stat.py'
Nov 25 09:28:04 compute-0 sudo[49571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:04 compute-0 python3.9[49573]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:28:04 compute-0 sudo[49571]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:05 compute-0 sudo[49723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-johwkfmojnpccclmywmxzaqiiywgdwnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062884.592774-527-173545787647967/AnsiballZ_ini_file.py'
Nov 25 09:28:05 compute-0 sudo[49723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:05 compute-0 python3.9[49725]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:05 compute-0 sudo[49723]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:05 compute-0 sudo[49877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkdjpiwkrrpidnihgoahhqurvrddcohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062885.517369-557-223937956789687/AnsiballZ_ini_file.py'
Nov 25 09:28:05 compute-0 sudo[49877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:05 compute-0 python3.9[49879]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:05 compute-0 sudo[49877]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:06 compute-0 sudo[50029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elbfgdhkdckdhtjijaognlcchzdmakng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062885.962598-557-222538676770481/AnsiballZ_ini_file.py'
Nov 25 09:28:06 compute-0 sudo[50029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:06 compute-0 python3.9[50031]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:06 compute-0 sudo[50029]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:06 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:28:06 compute-0 sudo[50183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxmbybyteuhviuazerahadrdalqowbxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062886.4759686-602-54867400333715/AnsiballZ_ini_file.py'
Nov 25 09:28:06 compute-0 sudo[50183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:06 compute-0 python3.9[50185]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:06 compute-0 sudo[50183]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:07 compute-0 sudo[50335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfsetxhjnfdjewxjyveigyzpflftkwyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062886.9326942-602-100535571485445/AnsiballZ_ini_file.py'
Nov 25 09:28:07 compute-0 sudo[50335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:07 compute-0 python3.9[50337]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:07 compute-0 sudo[50335]: pam_unix(sudo:session): session closed for user root
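
The five ini_file invocations above net out to three changes: set no-auto-default=* in the [main] section of /etc/NetworkManager/NetworkManager.conf (stop NM auto-creating "Wired connection" profiles for new NICs), and remove any dns=none and rc-manager=unmanaged overrides from both that file and /etc/NetworkManager/conf.d/99-cloud-init.conf, handing DNS and resolv.conf management back to NetworkManager. A spot-check sketch (the files may carry other keys):

    # [main] should now contain no-auto-default=* and no dns=/rc-manager= overrides
    grep -A5 '^\[main\]' /etc/NetworkManager/NetworkManager.conf
    grep -E '^(dns|rc-manager)=' /etc/NetworkManager/conf.d/99-cloud-init.conf || echo "overrides removed"
    nmcli general reload      # pick up the config edits without restarting NM
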
Nov 25 09:28:07 compute-0 sudo[50487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blqhvfyqpmjhpawnxsrynvjajflglcdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062887.4683983-647-132464618192417/AnsiballZ_stat.py'
Nov 25 09:28:07 compute-0 sudo[50487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:07 compute-0 python3.9[50489]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:07 compute-0 sudo[50487]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:08 compute-0 sudo[50610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edrwutlmjnmybmhbqvmxutwcfazeeqnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062887.4683983-647-132464618192417/AnsiballZ_copy.py'
Nov 25 09:28:08 compute-0 sudo[50610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:08 compute-0 python3.9[50612]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062887.4683983-647-132464618192417/.source _original_basename=.rce6x186 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:08 compute-0 sudo[50610]: pam_unix(sudo:session): session closed for user root
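
The stat/copy pair above deploys /etc/dhcp/dhclient-enter-hooks with mode 0755. dhclient sources that hook on every lease event; EDPM commonly uses it to stop dhclient from rewriting resolv.conf behind NetworkManager's back, though the script body itself is not logged. The deployment can be verified against the checksum the log already records:

    # Confirm the hook matches what Ansible pushed (checksum from the log)
    stat -c '%a %U:%G %n' /etc/dhcp/dhclient-enter-hooks      # expect 755
    sha1sum /etc/dhcp/dhclient-enter-hooks                    # expect f6278a40de79a9841f6ed1fc584538225566990c
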
Nov 25 09:28:08 compute-0 sudo[50762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaejvabayeqtzncqvcclfzkpekikoqlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062888.4812596-692-210855644423340/AnsiballZ_file.py'
Nov 25 09:28:08 compute-0 sudo[50762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:08 compute-0 python3.9[50764]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:08 compute-0 sudo[50762]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:09 compute-0 sudo[50914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzgnddvtcexgaswvdfbvkhfrxumrfmqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062888.9752746-716-123523383502205/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 25 09:28:09 compute-0 sudo[50914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:09 compute-0 python3.9[50916]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 09:28:09 compute-0 sudo[50914]: pam_unix(sudo:session): session closed for user root
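
edpm_os_net_config_mappings ran with an empty net_config_data_lookup, so there is no per-node NIC aliasing to apply here. With entries, the module resolves nicN aliases to concrete interfaces (by node identity or MAC) and writes an os-net-config mapping file; a hypothetical result, assuming the default mapping path, would look like:

    # Hypothetical /etc/os-net-config/mapping.yaml produced by a non-empty lookup
    cat /etc/os-net-config/mapping.yaml
    # interface_mapping:
    #   nic1: eth0
    #   nic2: eth1
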
Nov 25 09:28:10 compute-0 sudo[51066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiorgccuqdsbpylsnolpcjqptdfhooro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062889.8473969-743-139877261242613/AnsiballZ_file.py'
Nov 25 09:28:10 compute-0 sudo[51066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:10 compute-0 python3.9[51068]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:10 compute-0 sudo[51066]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:10 compute-0 sudo[51218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgxhuosfimfzppagtdjmdxomhtlqxomw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062890.483919-773-25825163404600/AnsiballZ_stat.py'
Nov 25 09:28:10 compute-0 sudo[51218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:10 compute-0 sudo[51218]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:11 compute-0 sudo[51341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruzvxrqlnixjniygstsrwteslzrhrdyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062890.483919-773-25825163404600/AnsiballZ_copy.py'
Nov 25 09:28:11 compute-0 sudo[51341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:11 compute-0 sudo[51341]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:11 compute-0 sudo[51493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsfscjzpoknkurqsgcwdxicbptpvldjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062891.356019-818-41346420702930/AnsiballZ_slurp.py'
Nov 25 09:28:11 compute-0 sudo[51493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:11 compute-0 python3.9[51495]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 09:28:11 compute-0 sudo[51493]: pam_unix(sudo:session): session closed for user root
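
The directory creation, stale-script cleanup, and slurp above stage /etc/os-net-config/config.yaml for the apply step. The file's content is slurped rather than logged, but the devices created at 09:28:14 below (bridge br-ex, uplink eth1, internal ports vlan20-vlan23) imply a shape like this assumed reconstruction, runnable as a dry run in a sandbox:

    # Assumed shape of config.yaml, inferred from the devices activated below
    cat > /tmp/config.yaml <<'EOF'
    network_config:
      - type: ovs_bridge
        name: br-ex
        members:
          - type: interface
            name: eth1
          - type: vlan
            vlan_id: 20
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22
          - type: vlan
            vlan_id: 23
    EOF
    os-net-config --noop -c /tmp/config.yaml      # print actions without applying
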
Nov 25 09:28:12 compute-0 sudo[51668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wovwdtxavqhpscjckcjtngxjbeookavx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062892.049708-845-130486959527067/async_wrapper.py j966309428224 300 /home/zuul/.ansible/tmp/ansible-tmp-1764062892.049708-845-130486959527067/AnsiballZ_edpm_os_net_config.py _'
Nov 25 09:28:12 compute-0 sudo[51668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:12 compute-0 ansible-async_wrapper.py[51670]: Invoked with j966309428224 300 /home/zuul/.ansible/tmp/ansible-tmp-1764062892.049708-845-130486959527067/AnsiballZ_edpm_os_net_config.py _
Nov 25 09:28:12 compute-0 ansible-async_wrapper.py[51673]: Starting module and watcher
Nov 25 09:28:12 compute-0 ansible-async_wrapper.py[51673]: Start watching 51674 (300)
Nov 25 09:28:12 compute-0 ansible-async_wrapper.py[51674]: Start module (51674)
Nov 25 09:28:12 compute-0 ansible-async_wrapper.py[51670]: Return async_wrapper task started.
Nov 25 09:28:12 compute-0 sudo[51668]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:12 compute-0 python3.9[51675]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
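
This is the core apply: async_wrapper forks the edpm_os_net_config module (pid 51674) under a 300-second watcher so the Ansible connection can survive the network being reconfigured underneath it. The module drives os-net-config against /etc/os-net-config/config.yaml with the nmstate provider (use_nmstate=True), which is why everything that follows happens through NetworkManager rather than ifcfg files. Its parameters map approximately onto the upstream CLI (the nmstate provider selection is the module's job and is assumed here):

    # Approximate CLI equivalent of the module invocation above
    os-net-config --debug --detailed-exit-codes --cleanup \
        -c /etc/os-net-config/config.yaml
    # with --detailed-exit-codes, rc 0 = no changes needed, rc 2 = changes applied
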
Nov 25 09:28:13 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 09:28:13 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 09:28:13 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 09:28:13 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 09:28:13 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1314] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1328] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1686] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1687] audit: op="connection-add" uuid="1d3d0572-a8af-4cb3-92b4-b7f38625b9a5" name="br-ex-br" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1697] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1699] audit: op="connection-add" uuid="37509732-df38-4bfc-98d5-bc5e2a281cbf" name="br-ex-port" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1707] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1708] audit: op="connection-add" uuid="2b4ac6db-10ed-430b-9637-21f0e1127d02" name="eth1-port" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1716] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1717] audit: op="connection-add" uuid="00ae4fa4-a06b-4b77-83b7-dac9b9f5e1ac" name="vlan20-port" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1727] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1728] audit: op="connection-add" uuid="3f86e5dc-f78e-4096-a1f2-8800bf897960" name="vlan21-port" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1740] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1741] audit: op="connection-add" uuid="5ab9aed2-1c81-49e6-a523-d46765976ae7" name="vlan22-port" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1749] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1751] audit: op="connection-add" uuid="ac738d90-91a6-4418-b1ea-6f3bcfd1422e" name="vlan23-port" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1766] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.dhcp-timeout,ipv6.may-fail,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1778] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1779] audit: op="connection-add" uuid="51e42bc3-192f-44a4-b8a7-9c89dba71eb1" name="br-ex-if" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1799] audit: op="connection-update" uuid="8d7be350-b956-5589-a7f6-ba574a72fbd9" name="ci-private-network" args="ipv6.addresses,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ovs-external-ids.data,ipv4.addresses,ipv4.never-default,ipv4.method,ipv4.dns,ipv4.routes,ipv4.routing-rules,ovs-interface.type,connection.controller,connection.master,connection.port-type,connection.slave-type,connection.timestamp" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1811] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1812] audit: op="connection-add" uuid="b44e377e-548f-4afa-9cba-ce4401c7ecc5" name="vlan20-if" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1824] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1825] audit: op="connection-add" uuid="762a2c0c-0be3-494b-82ef-a9e3de9037b0" name="vlan21-if" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1837] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1839] audit: op="connection-add" uuid="b724bdf5-c70e-4b26-a0df-37cc62c27dbd" name="vlan22-if" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1850] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1852] audit: op="connection-add" uuid="4d931fbb-c9e7-417e-bbe3-4e0cd6cce3ae" name="vlan23-if" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1860] audit: op="connection-delete" uuid="4dea2d72-efea-3ded-bb5c-4e572717d306" name="Wired connection 1" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1868] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1875] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1878] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (1d3d0572-a8af-4cb3-92b4-b7f38625b9a5)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1878] audit: op="connection-activate" uuid="1d3d0572-a8af-4cb3-92b4-b7f38625b9a5" name="br-ex-br" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1880] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1884] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1887] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (37509732-df38-4bfc-98d5-bc5e2a281cbf)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1889] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1893] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1896] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (2b4ac6db-10ed-430b-9637-21f0e1127d02)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1897] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1902] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1905] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (00ae4fa4-a06b-4b77-83b7-dac9b9f5e1ac)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1907] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1912] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1915] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (3f86e5dc-f78e-4096-a1f2-8800bf897960)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1916] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1921] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1924] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (5ab9aed2-1c81-49e6-a523-d46765976ae7)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1925] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1931] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1933] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (ac738d90-91a6-4418-b1ea-6f3bcfd1422e)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1934] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1936] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1937] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1942] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1945] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1949] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (51e42bc3-192f-44a4-b8a7-9c89dba71eb1)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1950] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1952] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1953] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1954] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1955] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1963] device (eth1): disconnecting for new activation request.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1963] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1965] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1967] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1967] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1969] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1973] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1975] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (b44e377e-548f-4afa-9cba-ce4401c7ecc5)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1976] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1978] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1979] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1980] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1982] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1985] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1988] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (762a2c0c-0be3-494b-82ef-a9e3de9037b0)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1989] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1991] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1992] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1992] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1994] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.1998] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2001] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (b724bdf5-c70e-4b26-a0df-37cc62c27dbd)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2002] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2004] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2005] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2006] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2008] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2011] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2014] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (4d931fbb-c9e7-417e-bbe3-4e0cd6cce3ae)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2015] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2017] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2018] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2019] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2020] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2029] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.may-fail,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2031] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2034] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2035] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2040] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2045] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2057] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2060] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2062] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 kernel: Timeout policy base is empty
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2072] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2075] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2077] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2078] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2082] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2085] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2088] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 systemd-udevd[51682]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2089] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2092] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2095] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2098] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2099] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2102] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2105] dhcp4 (eth0): canceled DHCP transaction
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2106] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2106] dhcp4 (eth0): state changed no lease
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2106] dhcp6 (eth0): canceled DHCP transaction
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2106] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2106] dhcp6 (eth0): state changed no lease
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2110] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2118] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2121] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51676 uid=0 result="fail" reason="Device is not activated"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2124] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2129] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2142] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2146] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2149] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 25 09:28:14 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2207] device (eth1): disconnecting for new activation request.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2208] audit: op="connection-activate" uuid="8d7be350-b956-5589-a7f6-ba574a72fbd9" name="ci-private-network" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2239] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2313] device (eth1): Activation: starting connection 'ci-private-network' (8d7be350-b956-5589-a7f6-ba574a72fbd9)
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2316] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2317] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51676 uid=0 result="success"
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2321] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2324] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2327] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2330] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2333] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2334] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2335] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2336] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2337] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2337] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2340] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2344] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2347] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2349] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2352] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2354] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2357] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2359] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2362] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2364] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2367] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2370] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2373] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2375] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2378] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2398] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2400] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2403] device (eth1): Activation: successful, device activated.
Nov 25 09:28:14 compute-0 kernel: br-ex: entered promiscuous mode
Nov 25 09:28:14 compute-0 kernel: vlan22: entered promiscuous mode
Nov 25 09:28:14 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 09:28:14 compute-0 systemd-udevd[51680]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2525] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2538] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 kernel: vlan23: entered promiscuous mode
Nov 25 09:28:14 compute-0 kernel: vlan20: entered promiscuous mode
Nov 25 09:28:14 compute-0 kernel: vlan21: entered promiscuous mode
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2790] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2796] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2800] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2804] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2809] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2811] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2815] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2842] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2847] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2854] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2860] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2872] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2873] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2874] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2876] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2878] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2883] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2887] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2893] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2897] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2902] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2907] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:28:14 compute-0 NetworkManager[48903]: <info>  [1764062894.2912] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
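
Everything from checkpoint-create at 09:28:14.13 to this point is one guarded transaction: NM builds the OVS topology (bridge br-ex, eth1 attached as uplink, internal ports vlan20-vlan23), rewrites "System eth0" and "ci-private-network", deletes the stale "Wired connection 1", and re-activates eth1 into the bridge. The kernel's "entered promiscuous mode" lines are those interfaces joining the OVS datapath, and the single failed device-reapply on eth1 ("Device is not activated") is benign: eth1 was mid-reactivation and receives a fresh connection-activate immediately after. The result can be inspected with:

    # Inspect the bridge topology os-net-config just built
    ovs-vsctl show
    ovs-vsctl list-ports br-ex                      # expect eth1 plus vlan20..vlan23
    nmcli -f NAME,TYPE,DEVICE connection show --active
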
Nov 25 09:28:15 compute-0 NetworkManager[48903]: <info>  [1764062895.3783] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51676 uid=0 result="success"
Nov 25 09:28:15 compute-0 NetworkManager[48903]: <info>  [1764062895.4933] checkpoint[0x557f9a58c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 09:28:15 compute-0 NetworkManager[48903]: <info>  [1764062895.4934] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51676 uid=0 result="success"
Nov 25 09:28:15 compute-0 NetworkManager[48903]: <info>  [1764062895.6117] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51676 uid=0 result="success"
Nov 25 09:28:15 compute-0 NetworkManager[48903]: <info>  [1764062895.6127] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51676 uid=0 result="success"
Nov 25 09:28:15 compute-0 NetworkManager[48903]: <info>  [1764062895.7796] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51676 uid=0 result="success"
Nov 25 09:28:15 compute-0 NetworkManager[48903]: <info>  [1764062895.9029] checkpoint[0x557f9a58ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 09:28:15 compute-0 NetworkManager[48903]: <info>  [1764062895.9033] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51676 uid=0 result="success"
Nov 25 09:28:16 compute-0 NetworkManager[48903]: <info>  [1764062896.1358] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51676 uid=0 result="success"
Nov 25 09:28:16 compute-0 NetworkManager[48903]: <info>  [1764062896.1366] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51676 uid=0 result="success"
Nov 25 09:28:16 compute-0 sudo[52029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzywffduhkzsgpzaqeiksmbxevdnhqmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062895.8527935-845-86690277988398/AnsiballZ_async_status.py'
Nov 25 09:28:16 compute-0 sudo[52029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:16 compute-0 NetworkManager[48903]: <info>  [1764062896.2946] audit: op="networking-control" arg="global-dns-configuration" pid=51676 uid=0 result="success"
Nov 25 09:28:16 compute-0 NetworkManager[48903]: <info>  [1764062896.2961] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Nov 25 09:28:16 compute-0 NetworkManager[48903]: <info>  [1764062896.2965] audit: op="networking-control" arg="global-dns-configuration" pid=51676 uid=0 result="success"
Nov 25 09:28:16 compute-0 NetworkManager[48903]: <info>  [1764062896.3013] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51676 uid=0 result="success"
Nov 25 09:28:16 compute-0 python3.9[52031]: ansible-ansible.legacy.async_status Invoked with jid=j966309428224.51670 mode=status _async_dir=/root/.ansible_async
Nov 25 09:28:16 compute-0 NetworkManager[48903]: <info>  [1764062896.4127] checkpoint[0x557f9a58caf0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Nov 25 09:28:16 compute-0 NetworkManager[48903]: <info>  [1764062896.4131] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51676 uid=0 result="success"
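
The three create/adjust-rollback-timeout/destroy triplets are NetworkManager's checkpoint API providing the safety net: nmstate opens a checkpoint before each batch of changes, keeps pushing the rollback deadline out while it works, and destroys (commits) the checkpoint once verification passes. Had the module died mid-apply, the timer would have expired and NM would have rolled every change back automatically. The same primitive is reachable over D-Bus; a minimal sketch (empty device array = all devices, 60-second timeout, flags 0):

    # Open a 60-second auto-rollback window by hand
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 0
    # returns a checkpoint object path; CheckpointDestroy commits it,
    # while doing nothing for 60 s rolls the network config back instead
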
Nov 25 09:28:16 compute-0 sudo[52029]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:16 compute-0 ansible-async_wrapper.py[51674]: Module complete (51674)
Nov 25 09:28:17 compute-0 ansible-async_wrapper.py[51673]: Done in kid B.
Nov 25 09:28:19 compute-0 sudo[52134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvspzaxkgackehplnndhfujqooqamaqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062895.8527935-845-86690277988398/AnsiballZ_async_status.py'
Nov 25 09:28:19 compute-0 sudo[52134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:19 compute-0 python3.9[52136]: ansible-ansible.legacy.async_status Invoked with jid=j966309428224.51670 mode=status _async_dir=/root/.ansible_async
Nov 25 09:28:19 compute-0 sudo[52134]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:19 compute-0 sudo[52234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udqsaqtntouxjeqgdlrrvznebxmiphfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062895.8527935-845-86690277988398/AnsiballZ_async_status.py'
Nov 25 09:28:19 compute-0 sudo[52234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:20 compute-0 python3.9[52236]: ansible-ansible.legacy.async_status Invoked with jid=j966309428224.51670 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 09:28:20 compute-0 sudo[52234]: pam_unix(sudo:session): session closed for user root
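The three async_status calls above poll job jid=j966309428224.51670 (mode=status) and then delete its job file (mode=cleanup) from /root/.ansible_async. A minimal Python sketch of that lifecycle, assuming the documented layout of the async job file (a JSON document whose "finished" key flips to 1 when the wrapped task completes; the file name is taken from the jid in the log):

    import json, time
    from pathlib import Path

    job = Path("/root/.ansible_async/j966309428224.51670")  # jid from the log above

    def finished() -> bool:
        # mode=status: read the job file and report completion
        return json.loads(job.read_text()).get("finished", 0) == 1

    while not finished():      # the controller keeps re-running async_status
        time.sleep(3)

    job.unlink()               # mode=cleanup: remove the job file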
Nov 25 09:28:20 compute-0 sudo[52386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxrkmtutiykidxhkrjkmuxyanobvevzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062900.3606353-926-15660491524627/AnsiballZ_stat.py'
Nov 25 09:28:20 compute-0 sudo[52386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:20 compute-0 python3.9[52388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:20 compute-0 sudo[52386]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:20 compute-0 sudo[52509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpptfgqflojewryimdayurrmnvdualn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062900.3606353-926-15660491524627/AnsiballZ_copy.py'
Nov 25 09:28:20 compute-0 sudo[52509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:21 compute-0 python3.9[52511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062900.3606353-926-15660491524627/.source.returncode _original_basename=.x7xdojvv follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:21 compute-0 sudo[52509]: pam_unix(sudo:session): session closed for user root
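The checksum logged for /var/lib/edpm-config/os-net-config.returncode, b6589fc6ab0dc82cf12099d1c2d40ab994e8410c, is the SHA-1 of the single character "0", so the file records an os-net-config exit status of zero. Quick check:

    import hashlib
    print(hashlib.sha1(b"0").hexdigest())
    # -> b6589fc6ab0dc82cf12099d1c2d40ab994e8410c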
Nov 25 09:28:21 compute-0 sudo[52661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvixmrngekpbjmxmuzfqvlmjtcfxjhqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062901.310991-974-34519229179669/AnsiballZ_stat.py'
Nov 25 09:28:21 compute-0 sudo[52661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:21 compute-0 python3.9[52663]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:21 compute-0 sudo[52661]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:21 compute-0 sudo[52784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rleiabbnrmxhdpmfdiovwudpyzoeqczj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062901.310991-974-34519229179669/AnsiballZ_copy.py'
Nov 25 09:28:21 compute-0 sudo[52784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:22 compute-0 python3.9[52786]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062901.310991-974-34519229179669/.source.cfg _original_basename=.bmwm6cfn follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:22 compute-0 sudo[52784]: pam_unix(sudo:session): session closed for user root
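99-edpm-disable-network-config.cfg is the cloud-init drop-in that keeps cloud-init from rewriting interface configuration now that os-net-config owns it. The file content is not logged; the conventional one-liner, written here in Python for illustration, would be:

    from pathlib import Path

    Path("/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg").write_text(
        "network: {config: disabled}\n"   # assumed content: standard cloud-init switch
    )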
Nov 25 09:28:22 compute-0 sudo[52936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqirjaoadskhsjrcfhnqrvurvidykbja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062902.209468-1019-115484788073870/AnsiballZ_systemd.py'
Nov 25 09:28:22 compute-0 sudo[52936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:22 compute-0 python3.9[52938]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:28:22 compute-0 systemd[1]: Reloading Network Manager...
Nov 25 09:28:22 compute-0 NetworkManager[48903]: <info>  [1764062902.6916] audit: op="reload" arg="0" pid=52942 uid=0 result="success"
Nov 25 09:28:22 compute-0 NetworkManager[48903]: <info>  [1764062902.6921] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 09:28:22 compute-0 NetworkManager[48903]: <info>  [1764062902.6922] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 09:28:22 compute-0 systemd[1]: Reloaded Network Manager.
Nov 25 09:28:22 compute-0 sudo[52936]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:23 compute-0 sshd-session[44891]: Connection closed by 192.168.122.30 port 37196
Nov 25 09:28:23 compute-0 sshd-session[44888]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:28:23 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 09:28:23 compute-0 systemd[1]: session-10.scope: Consumed 34.898s CPU time.
Nov 25 09:28:23 compute-0 systemd-logind[744]: Session 10 logged out. Waiting for processes to exit.
Nov 25 09:28:23 compute-0 systemd-logind[744]: Removed session 10.
Nov 25 09:28:25 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:28:28 compute-0 sshd-session[52976]: Accepted publickey for zuul from 192.168.122.30 port 52892 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:28:28 compute-0 systemd-logind[744]: New session 11 of user zuul.
Nov 25 09:28:28 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 25 09:28:28 compute-0 sshd-session[52976]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:28:29 compute-0 python3.9[53129]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:28:29 compute-0 python3.9[53284]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:28:30 compute-0 python3.9[53477]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:28:31 compute-0 sshd-session[52979]: Connection closed by 192.168.122.30 port 52892
Nov 25 09:28:31 compute-0 sshd-session[52976]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:28:31 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 09:28:31 compute-0 systemd[1]: session-11.scope: Consumed 1.616s CPU time.
Nov 25 09:28:31 compute-0 systemd-logind[744]: Session 11 logged out. Waiting for processes to exit.
Nov 25 09:28:31 compute-0 systemd-logind[744]: Removed session 11.
Nov 25 09:28:32 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:28:36 compute-0 sshd-session[53506]: Accepted publickey for zuul from 192.168.122.30 port 38172 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:28:36 compute-0 systemd-logind[744]: New session 12 of user zuul.
Nov 25 09:28:36 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 25 09:28:36 compute-0 sshd-session[53506]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:28:37 compute-0 python3.9[53659]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:28:37 compute-0 python3.9[53813]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:28:38 compute-0 sudo[53967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfijqinwuezpdabjqwgupccyafixjwqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062918.1724715-80-202068320856534/AnsiballZ_setup.py'
Nov 25 09:28:38 compute-0 sudo[53967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:38 compute-0 python3.9[53969]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:28:38 compute-0 sudo[53967]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:39 compute-0 sudo[54052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-remtkjuqpcflttnxcnthbucwyqfixptr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062918.1724715-80-202068320856534/AnsiballZ_dnf.py'
Nov 25 09:28:39 compute-0 sudo[54052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:39 compute-0 python3.9[54054]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:28:40 compute-0 sudo[54052]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:40 compute-0 sudo[54205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdpffqcggvnwejmtlmzzrhmshkgkmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062920.4106863-116-148500309038467/AnsiballZ_setup.py'
Nov 25 09:28:40 compute-0 sudo[54205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:40 compute-0 python3.9[54207]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:28:40 compute-0 sudo[54205]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:41 compute-0 sudo[54400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfpssrfgsyfaeulbktjvtyfwactlyqmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062921.4186792-149-1941813868350/AnsiballZ_file.py'
Nov 25 09:28:41 compute-0 sudo[54400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:41 compute-0 python3.9[54402]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:41 compute-0 sudo[54400]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:42 compute-0 sudo[54553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbbtnetmltzieecnoczdsopbtophusmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062922.0173132-173-65471163523054/AnsiballZ_command.py'
Nov 25 09:28:42 compute-0 sudo[54553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:42 compute-0 python3.9[54555]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4006625146-merged.mount: Deactivated successfully.
Nov 25 09:28:42 compute-0 podman[54556]: 2025-11-25 09:28:42.512311313 +0000 UTC m=+0.025148949 system refresh
Nov 25 09:28:42 compute-0 sudo[54553]: pam_unix(sudo:session): session closed for user root
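The shell task runs `podman network inspect podman`, and the `system refresh` event shows this was the first podman invocation since its storage was (re)initialized. Inspect prints a JSON array; a sketch of consuming it (the field names are the usual netavark ones and are assumptions here):

    import json, subprocess

    out = subprocess.run(
        ["podman", "network", "inspect", "podman"],
        capture_output=True, text=True, check=True,
    ).stdout
    net = json.loads(out)[0]                  # inspect returns a JSON array
    print(net["driver"], net.get("subnets"))  # e.g. bridge plus its subnet list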
Nov 25 09:28:42 compute-0 sudo[54714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psyuxpstmnbeimnjovtrfshuwqvgkqif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062922.6776657-197-189997880514578/AnsiballZ_stat.py'
Nov 25 09:28:42 compute-0 sudo[54714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:43 compute-0 python3.9[54716]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:43 compute-0 sudo[54714]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:28:43 compute-0 sudo[54837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufgycqkdetyibqiuokezoynnhbedsgzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062922.6776657-197-189997880514578/AnsiballZ_copy.py'
Nov 25 09:28:43 compute-0 sudo[54837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:43 compute-0 python3.9[54839]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062922.6776657-197-189997880514578/.source.json follow=False _original_basename=podman_network_config.j2 checksum=c95a829f9b2a1a99178590c94f2f3031317a87cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:43 compute-0 sudo[54837]: pam_unix(sudo:session): session closed for user root
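The follow-up copy pins the default network definition in /etc/containers/networks/podman.json from the podman_network_config.j2 template. The rendered content is not logged; a representative netavark network file, sketched in Python (every value below is an assumption, including podman's stock 10.88.0.0/16 subnet):

    import json
    from pathlib import Path

    podman_net = {
        "name": "podman",
        "driver": "bridge",
        "network_interface": "podman0",
        "subnets": [{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}],
        "dns_enabled": False,
    }
    Path("/etc/containers/networks/podman.json").write_text(
        json.dumps(podman_net, indent=2) + "\n"
    )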
Nov 25 09:28:44 compute-0 sudo[54989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftmbnyjeyywjkcjifcfoagyxwvnxbore ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062923.8247857-242-275770797100828/AnsiballZ_stat.py'
Nov 25 09:28:44 compute-0 sudo[54989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:44 compute-0 python3.9[54991]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:44 compute-0 sudo[54989]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:44 compute-0 sudo[55112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdgooghgwiqjrzecpfuxbzerskdlkpgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062923.8247857-242-275770797100828/AnsiballZ_copy.py'
Nov 25 09:28:44 compute-0 sudo[55112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:44 compute-0 python3.9[55114]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062923.8247857-242-275770797100828/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:44 compute-0 sudo[55112]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:45 compute-0 sudo[55264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfrqmhukpwntrrovxzmutcbfvaxhckhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062924.8104568-290-91093831793791/AnsiballZ_ini_file.py'
Nov 25 09:28:45 compute-0 sudo[55264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:45 compute-0 python3.9[55266]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:45 compute-0 sudo[55264]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:45 compute-0 sudo[55417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecgweroxpdmohhqashaqaduxtvrcnrmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062925.3840046-290-224004171995993/AnsiballZ_ini_file.py'
Nov 25 09:28:45 compute-0 sudo[55417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:45 compute-0 python3.9[55419]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:45 compute-0 sudo[55417]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:45 compute-0 sudo[55569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sokinhyxoqqydpcwjhqherbrsxmistrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062925.8124862-290-96455222910340/AnsiballZ_ini_file.py'
Nov 25 09:28:45 compute-0 sudo[55569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:46 compute-0 python3.9[55571]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:46 compute-0 sudo[55569]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:46 compute-0 sudo[55721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnlsvbytyxsfsbkscmiuvqskhcmvfluq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062926.25044-290-81321055014593/AnsiballZ_ini_file.py'
Nov 25 09:28:46 compute-0 sudo[55721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:46 compute-0 python3.9[55723]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:28:46 compute-0 sudo[55721]: pam_unix(sudo:session): session closed for user root
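Taken together, the four ini_file tasks above leave /etc/containers/containers.conf with pids_limit under [containers], events_logger and runtime under [engine], and network_backend under [network]. containers.conf is TOML, but for these simple key = "value" pairs the module's INI-style edit yields the same shape, reproduced here as a sketch:

    import configparser

    conf = configparser.ConfigParser()
    conf.read_dict({
        "containers": {"pids_limit": "4096"},
        "engine":     {"events_logger": '"journald"', "runtime": '"crun"'},
        "network":    {"network_backend": '"netavark"'},
    })
    with open("/etc/containers/containers.conf", "w") as f:
        conf.write(f)   # writes key = value lines under each section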
Nov 25 09:28:47 compute-0 sudo[55873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmyktytzvsqwjeirhrayfhjerozkvyte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062926.864216-383-81583139319623/AnsiballZ_dnf.py'
Nov 25 09:28:47 compute-0 sudo[55873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:47 compute-0 python3.9[55875]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:28:48 compute-0 sudo[55873]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:48 compute-0 sudo[56026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxihbbjljftmaisppgznzkqdopatrsqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062928.7127345-416-65541816061109/AnsiballZ_setup.py'
Nov 25 09:28:48 compute-0 sudo[56026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:49 compute-0 python3.9[56028]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:28:49 compute-0 sudo[56026]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:49 compute-0 sudo[56180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjssxziptctdpemzptgnyilyixvbhdco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062929.3443909-440-91669752317453/AnsiballZ_stat.py'
Nov 25 09:28:49 compute-0 sudo[56180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:49 compute-0 python3.9[56182]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:28:49 compute-0 sudo[56180]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:50 compute-0 sudo[56332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atdlletbxkjsfnxzbklesejnpiqovcwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062929.8871527-467-212492819770215/AnsiballZ_stat.py'
Nov 25 09:28:50 compute-0 sudo[56332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:50 compute-0 python3.9[56334]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:28:50 compute-0 sudo[56332]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:50 compute-0 sudo[56484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpohxlsggfhhjmigelictlgndzzscshe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062930.545083-497-71867934854966/AnsiballZ_command.py'
Nov 25 09:28:50 compute-0 sudo[56484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:50 compute-0 python3.9[56486]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:28:50 compute-0 sudo[56484]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:51 compute-0 sudo[56637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nytqiggcexeavkxrmuxazuzljizpqnqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062931.1932633-527-174747732676500/AnsiballZ_service_facts.py'
Nov 25 09:28:51 compute-0 sudo[56637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:51 compute-0 python3.9[56639]: ansible-service_facts Invoked
Nov 25 09:28:51 compute-0 network[56656]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:28:51 compute-0 network[56657]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:28:51 compute-0 network[56658]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:28:53 compute-0 sudo[56637]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:54 compute-0 sudo[56941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhzotvmrvoymtxwulgeeboobvwqdplsu ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764062934.507249-572-61972582068949/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764062934.507249-572-61972582068949/args'
Nov 25 09:28:54 compute-0 sudo[56941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:54 compute-0 sudo[56941]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:55 compute-0 sudo[57108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzcdjzmnsmczokpfsxzznmmwkoonfnzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062935.027963-605-197085647249214/AnsiballZ_dnf.py'
Nov 25 09:28:55 compute-0 sudo[57108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:55 compute-0 python3.9[57110]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:28:56 compute-0 sudo[57108]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:57 compute-0 sudo[57261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pepywdybtqjgbxbqvahkacqmsmelpdjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062936.9385734-644-93525713521193/AnsiballZ_package_facts.py'
Nov 25 09:28:57 compute-0 sudo[57261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:57 compute-0 python3.9[57263]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 09:28:57 compute-0 sudo[57261]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:58 compute-0 sudo[57413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tusbdkbuieuskrszfeazaogxtlugyqjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062938.4042425-674-147170629332714/AnsiballZ_stat.py'
Nov 25 09:28:58 compute-0 sudo[57413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:58 compute-0 python3.9[57415]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:28:58 compute-0 sudo[57413]: pam_unix(sudo:session): session closed for user root
Nov 25 09:28:59 compute-0 sudo[57538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-equyzwoqcnmaotxgepheyerhprobrlzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062938.4042425-674-147170629332714/AnsiballZ_copy.py'
Nov 25 09:28:59 compute-0 sudo[57538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:28:59 compute-0 python3.9[57540]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062938.4042425-674-147170629332714/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:28:59 compute-0 sudo[57538]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:00 compute-0 sudo[57692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvezlzhmgzmigifzongzjyxrjkpvcawk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062939.9596753-719-121390186121513/AnsiballZ_stat.py'
Nov 25 09:29:00 compute-0 sudo[57692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:00 compute-0 python3.9[57694]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:00 compute-0 sudo[57692]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:00 compute-0 sudo[57817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soybsksutmfgtwhzrjcakiasrykjeurr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062939.9596753-719-121390186121513/AnsiballZ_copy.py'
Nov 25 09:29:00 compute-0 sudo[57817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:00 compute-0 python3.9[57819]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062939.9596753-719-121390186121513/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:00 compute-0 sudo[57817]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:02 compute-0 sudo[57971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aflaynxdccdxgiqylfydidpibihdzpwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062941.670272-782-244597630418201/AnsiballZ_lineinfile.py'
Nov 25 09:29:02 compute-0 sudo[57971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:02 compute-0 python3.9[57973]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:02 compute-0 sudo[57971]: pam_unix(sudo:session): session closed for user root
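The lineinfile task pins PEERNTP=no in /etc/sysconfig/network so DHCP-supplied NTP servers cannot override the chrony configuration just installed. Its semantics (with firstmatch=False, replace the last ^PEERNTP= match, otherwise append; create the file if absent) reduce to roughly:

    import re
    from pathlib import Path

    path = Path("/etc/sysconfig/network")
    lines = path.read_text().splitlines() if path.exists() else []   # create=True
    hits = [i for i, l in enumerate(lines) if re.match(r"^PEERNTP=", l)]
    if hits:
        lines[hits[-1]] = "PEERNTP=no"   # lineinfile edits the last match by default
    else:
        lines.append("PEERNTP=no")
    path.write_text("\n".join(lines) + "\n")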
Nov 25 09:29:03 compute-0 sudo[58125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdbykmzrrcamyeysdhmnntexdrejzbhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062943.2291596-827-93444413018132/AnsiballZ_setup.py'
Nov 25 09:29:03 compute-0 sudo[58125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:03 compute-0 python3.9[58127]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:29:03 compute-0 sudo[58125]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:04 compute-0 sudo[58209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqcszdqcvneuqupjwrajltfkrnbpoigq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062943.2291596-827-93444413018132/AnsiballZ_systemd.py'
Nov 25 09:29:04 compute-0 sudo[58209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:04 compute-0 python3.9[58211]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:04 compute-0 sudo[58209]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:05 compute-0 sudo[58363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onlggalwhgcltjlzfuugqotzeewafogt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062945.2558005-875-140473987141907/AnsiballZ_setup.py'
Nov 25 09:29:05 compute-0 sudo[58363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:05 compute-0 python3.9[58365]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:29:05 compute-0 sudo[58363]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:06 compute-0 sudo[58447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbwvoqfdikleqtfccgxyqyljaxdkqdlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062945.2558005-875-140473987141907/AnsiballZ_systemd.py'
Nov 25 09:29:06 compute-0 sudo[58447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:06 compute-0 python3.9[58449]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:29:06 compute-0 chronyd[752]: chronyd exiting
Nov 25 09:29:06 compute-0 systemd[1]: Stopping NTP client/server...
Nov 25 09:29:06 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 09:29:06 compute-0 systemd[1]: Stopped NTP client/server.
Nov 25 09:29:06 compute-0 systemd[1]: Starting NTP client/server...
Nov 25 09:29:06 compute-0 chronyd[58457]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 09:29:06 compute-0 chronyd[58457]: Frequency -9.998 +/- 0.623 ppm read from /var/lib/chrony/drift
Nov 25 09:29:06 compute-0 chronyd[58457]: Loaded seccomp filter (level 2)
Nov 25 09:29:06 compute-0 systemd[1]: Started NTP client/server.
Nov 25 09:29:06 compute-0 sudo[58447]: pam_unix(sudo:session): session closed for user root
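On restart chronyd reloads a stored frequency of -9.998 ppm from /var/lib/chrony/drift, the clock's learned rate error. That is why the drift file is worth preserving across restarts: left uncorrected, an error of that size accumulates to nearly a second per day:

    ppm = 9.998e-6         # frequency error reported in the log
    print(ppm * 86400)     # ~0.864 s/day of drift if chronyd were not steering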
Nov 25 09:29:06 compute-0 sshd-session[53509]: Connection closed by 192.168.122.30 port 38172
Nov 25 09:29:06 compute-0 sshd-session[53506]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:29:06 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 09:29:06 compute-0 systemd[1]: session-12.scope: Consumed 17.297s CPU time.
Nov 25 09:29:06 compute-0 systemd-logind[744]: Session 12 logged out. Waiting for processes to exit.
Nov 25 09:29:06 compute-0 systemd-logind[744]: Removed session 12.
Nov 25 09:29:12 compute-0 sshd-session[58483]: Accepted publickey for zuul from 192.168.122.30 port 43578 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:29:12 compute-0 systemd-logind[744]: New session 13 of user zuul.
Nov 25 09:29:12 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 25 09:29:12 compute-0 sshd-session[58483]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:29:12 compute-0 sudo[58636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcrowykmskmzfekltqifdyujtbpehoef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062952.3500109-26-28159878640655/AnsiballZ_file.py'
Nov 25 09:29:12 compute-0 sudo[58636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:12 compute-0 python3.9[58638]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:12 compute-0 sudo[58636]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:13 compute-0 sudo[58788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpgfhzbluncbsepojwxgoogtyrfnenyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062952.987302-62-137212485504030/AnsiballZ_stat.py'
Nov 25 09:29:13 compute-0 sudo[58788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:13 compute-0 python3.9[58790]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:13 compute-0 sudo[58788]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:13 compute-0 sudo[58911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcxpdbqjhnrtovkfoqvnpmurtmwnqwbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062952.987302-62-137212485504030/AnsiballZ_copy.py'
Nov 25 09:29:13 compute-0 sudo[58911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:14 compute-0 python3.9[58913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062952.987302-62-137212485504030/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:14 compute-0 sudo[58911]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:14 compute-0 sshd-session[58486]: Connection closed by 192.168.122.30 port 43578
Nov 25 09:29:14 compute-0 sshd-session[58483]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:29:14 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 09:29:14 compute-0 systemd[1]: session-13.scope: Consumed 1.088s CPU time.
Nov 25 09:29:14 compute-0 systemd-logind[744]: Session 13 logged out. Waiting for processes to exit.
Nov 25 09:29:14 compute-0 systemd-logind[744]: Removed session 13.
Nov 25 09:29:20 compute-0 sshd-session[58938]: Accepted publickey for zuul from 192.168.122.30 port 58376 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:29:20 compute-0 systemd-logind[744]: New session 14 of user zuul.
Nov 25 09:29:20 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 25 09:29:20 compute-0 sshd-session[58938]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:29:21 compute-0 python3.9[59091]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:29:21 compute-0 sudo[59245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deaojckwtlpzpbhxwjfhpfqmgiopkynf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062961.571736-59-29791274110718/AnsiballZ_file.py'
Nov 25 09:29:21 compute-0 sudo[59245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:22 compute-0 python3.9[59247]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:22 compute-0 sudo[59245]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:22 compute-0 sudo[59420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzhrwozjpehwugmjfayhanqliggxofrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062962.1710284-83-216583822640457/AnsiballZ_stat.py'
Nov 25 09:29:22 compute-0 sudo[59420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:22 compute-0 python3.9[59422]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:22 compute-0 sudo[59420]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:23 compute-0 sudo[59543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hefobjayrvcjejewkunixnurylayeace ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062962.1710284-83-216583822640457/AnsiballZ_copy.py'
Nov 25 09:29:23 compute-0 sudo[59543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:23 compute-0 python3.9[59545]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764062962.1710284-83-216583822640457/.source.json _original_basename=.r_59jugw follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:23 compute-0 sudo[59543]: pam_unix(sudo:session): session closed for user root
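As with the returncode file earlier, the checksum identifies the content: bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f is the SHA-1 of "{}", so /root/.config/containers/auth.json was written as an empty JSON object, i.e. no registry credentials yet:

    import hashlib
    print(hashlib.sha1(b"{}").hexdigest())
    # -> bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f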
Nov 25 09:29:23 compute-0 sudo[59695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrjrbftmchqedknxrrxrspaqnwoctxae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062963.6211195-152-152368515916176/AnsiballZ_stat.py'
Nov 25 09:29:23 compute-0 sudo[59695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:23 compute-0 python3.9[59697]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:23 compute-0 sudo[59695]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:24 compute-0 sudo[59818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbfdalcozpvrjcdmyniytmqdvkwqucfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062963.6211195-152-152368515916176/AnsiballZ_copy.py'
Nov 25 09:29:24 compute-0 sudo[59818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:24 compute-0 python3.9[59820]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062963.6211195-152-152368515916176/.source _original_basename=.l9mwry5j follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:24 compute-0 sudo[59818]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:24 compute-0 sudo[59970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utlebdfpcwkonrafggyidiayehrhtxwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062964.5254564-200-213189067725662/AnsiballZ_file.py'
Nov 25 09:29:24 compute-0 sudo[59970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:24 compute-0 python3.9[59972]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:29:24 compute-0 sudo[59970]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:25 compute-0 sudo[60122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwxoiapvbdwkrramnvelzxsdvlmtlgac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062965.011599-224-167202754416963/AnsiballZ_stat.py'
Nov 25 09:29:25 compute-0 sudo[60122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:25 compute-0 python3.9[60124]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:25 compute-0 sudo[60122]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:25 compute-0 sudo[60245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awowvaeqdbemozggmqwrstlitihwrzvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062965.011599-224-167202754416963/AnsiballZ_copy.py'
Nov 25 09:29:25 compute-0 sudo[60245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:25 compute-0 python3.9[60247]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062965.011599-224-167202754416963/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:29:25 compute-0 sudo[60245]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:25 compute-0 sudo[60397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpazlymkcogmkaljrvuprnmyzmwhggbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062965.8236558-224-188702910556755/AnsiballZ_stat.py'
Nov 25 09:29:25 compute-0 sudo[60397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:26 compute-0 python3.9[60399]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:26 compute-0 sudo[60397]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:26 compute-0 sudo[60520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbahjvfixdjbwilmrpqzofjikfcpdvyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062965.8236558-224-188702910556755/AnsiballZ_copy.py'
Nov 25 09:29:26 compute-0 sudo[60520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:26 compute-0 python3.9[60522]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062965.8236558-224-188702910556755/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:29:26 compute-0 sudo[60520]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:26 compute-0 sudo[60672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqvbaqnzndhrnnivvpvqeolaiaxblbkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062966.691574-311-223128705103667/AnsiballZ_file.py'
Nov 25 09:29:26 compute-0 sudo[60672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:27 compute-0 python3.9[60674]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:27 compute-0 sudo[60672]: pam_unix(sudo:session): session closed for user root
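mode=420 in the file task above is not an error: the playbook passed the mode as a bare integer, which YAML parses as decimal, and 420 decimal is exactly the 0644 that neighbouring tasks spell out in octal:

    print(oct(420))   # -> 0o644, the permissions the task intends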
Nov 25 09:29:27 compute-0 sudo[60824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xewmqybtbyztwsfwskngknsrsxwsuhdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062967.1672022-335-104447513618241/AnsiballZ_stat.py'
Nov 25 09:29:27 compute-0 sudo[60824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:27 compute-0 python3.9[60826]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:27 compute-0 sudo[60824]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:27 compute-0 sudo[60947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjdpurecmcpycfrbstmzrzrimwtvqavh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062967.1672022-335-104447513618241/AnsiballZ_copy.py'
Nov 25 09:29:27 compute-0 sudo[60947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:27 compute-0 python3.9[60949]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062967.1672022-335-104447513618241/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:27 compute-0 sudo[60947]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:28 compute-0 sudo[61099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjdeiwhqamqercljccaznuwqbwwnqpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062968.025375-380-143681502631518/AnsiballZ_stat.py'
Nov 25 09:29:28 compute-0 sudo[61099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:28 compute-0 python3.9[61101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:28 compute-0 sudo[61099]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:28 compute-0 sudo[61222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niyxliitafysmqbdbbjlhruundqrindq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062968.025375-380-143681502631518/AnsiballZ_copy.py'
Nov 25 09:29:28 compute-0 sudo[61222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:28 compute-0 python3.9[61224]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062968.025375-380-143681502631518/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:28 compute-0 sudo[61222]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:29 compute-0 sudo[61374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycaaswopqbfmlhxrqadrogrxzsffriav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062968.900401-425-122783756482752/AnsiballZ_systemd.py'
Nov 25 09:29:29 compute-0 sudo[61374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:29 compute-0 python3.9[61376]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:29 compute-0 systemd[1]: Reloading.
Nov 25 09:29:29 compute-0 systemd-sysv-generator[61400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:29 compute-0 systemd-rc-local-generator[61397]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:29 compute-0 systemd[1]: Reloading.
Nov 25 09:29:29 compute-0 systemd-rc-local-generator[61434]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:29 compute-0 systemd-sysv-generator[61437]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:29 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 09:29:29 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 25 09:29:29 compute-0 sudo[61374]: pam_unix(sudo:session): session closed for user root
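This systemd task pairs the new unit with the 91-edpm-container-shutdown.preset installed just before, and daemon_reload=True plus the enable step most likely account for the two "Reloading." passes before the unit starts. A preset file is a one-line policy; the template content is not logged, so the sketch below is an assumption using standard preset syntax:

    from pathlib import Path

    Path("/etc/systemd/system-preset/91-edpm-container-shutdown.preset").write_text(
        "enable edpm-container-shutdown.service\n"   # assumed: standard preset directive
    )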
Nov 25 09:29:30 compute-0 sudo[61601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtocvcolodjeydsxuydjtzlnqrohxcxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062970.1363003-449-35900272641265/AnsiballZ_stat.py'
Nov 25 09:29:30 compute-0 sudo[61601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:30 compute-0 python3.9[61603]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:30 compute-0 sudo[61601]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:30 compute-0 sudo[61724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrgtdmapkefmmrhxsvkbzjpeecnjquji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062970.1363003-449-35900272641265/AnsiballZ_copy.py'
Nov 25 09:29:30 compute-0 sudo[61724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:30 compute-0 python3.9[61726]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062970.1363003-449-35900272641265/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:30 compute-0 sudo[61724]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:31 compute-0 sudo[61876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voauzevpfzrdufctpecqiimnoppbvtgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062970.9950278-494-274056055974475/AnsiballZ_stat.py'
Nov 25 09:29:31 compute-0 sudo[61876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:31 compute-0 python3.9[61878]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:31 compute-0 sudo[61876]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:31 compute-0 sudo[61999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lebfwlpzuimkzqvimkonufjhsfwkhprh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062970.9950278-494-274056055974475/AnsiballZ_copy.py'
Nov 25 09:29:31 compute-0 sudo[61999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:31 compute-0 python3.9[62001]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062970.9950278-494-274056055974475/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:31 compute-0 sudo[61999]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:32 compute-0 sudo[62151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkjlbjskpbifwrcuxnnurmhawdpxewlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062971.8384778-539-130963598937958/AnsiballZ_systemd.py'
Nov 25 09:29:32 compute-0 sudo[62151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:32 compute-0 python3.9[62153]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:32 compute-0 systemd[1]: Reloading.
Nov 25 09:29:32 compute-0 systemd-rc-local-generator[62174]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:32 compute-0 systemd-sysv-generator[62180]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:32 compute-0 systemd[1]: Reloading.
Nov 25 09:29:32 compute-0 systemd-rc-local-generator[62211]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:32 compute-0 systemd-sysv-generator[62214]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:32 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 09:29:32 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:29:32 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:29:32 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 09:29:32 compute-0 sudo[62151]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:33 compute-0 python3.9[62379]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:29:33 compute-0 network[62396]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:29:33 compute-0 network[62397]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:29:33 compute-0 network[62398]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:29:35 compute-0 sudo[62658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxifyoyicjkmedrlktylipktkbksvmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062975.470244-587-94334579332443/AnsiballZ_systemd.py'
Nov 25 09:29:35 compute-0 sudo[62658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:35 compute-0 python3.9[62660]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:35 compute-0 systemd[1]: Reloading.
Nov 25 09:29:36 compute-0 systemd-rc-local-generator[62686]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:36 compute-0 systemd-sysv-generator[62689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:36 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 09:29:36 compute-0 iptables.init[62699]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 09:29:36 compute-0 iptables.init[62699]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 09:29:36 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 09:29:36 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 09:29:36 compute-0 sudo[62658]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:36 compute-0 sudo[62893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yczhistvnnqczbyzzcvejwzhxxnmlzuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062976.485226-587-250628753081436/AnsiballZ_systemd.py'
Nov 25 09:29:36 compute-0 sudo[62893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:36 compute-0 python3.9[62895]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:36 compute-0 sudo[62893]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:37 compute-0 sudo[63047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kffdviikmgsupjpynvcsoicxodzefmbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062977.4748132-635-22774111079352/AnsiballZ_systemd.py'
Nov 25 09:29:37 compute-0 sudo[63047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:37 compute-0 python3.9[63049]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:29:37 compute-0 systemd[1]: Reloading.
Nov 25 09:29:37 compute-0 systemd-rc-local-generator[63074]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:29:38 compute-0 systemd-sysv-generator[63077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:29:38 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 25 09:29:38 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 25 09:29:38 compute-0 sudo[63047]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:38 compute-0 sudo[63239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsfahmqgnoqymuhxrgskierdpcjtnenr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062978.4202845-659-229612486301924/AnsiballZ_command.py'
Nov 25 09:29:38 compute-0 sudo[63239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:38 compute-0 python3.9[63241]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:29:38 compute-0 sudo[63239]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:39 compute-0 sudo[63392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljuybqftsajzcsgfmyrcebbsbuexbmho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062979.4661741-701-207714936881184/AnsiballZ_stat.py'
Nov 25 09:29:39 compute-0 sudo[63392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:39 compute-0 python3.9[63394]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:39 compute-0 sudo[63392]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:40 compute-0 sudo[63517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhnblyndjjzxscfayigkhlqhfzhcondf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062979.4661741-701-207714936881184/AnsiballZ_copy.py'
Nov 25 09:29:40 compute-0 sudo[63517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:40 compute-0 python3.9[63519]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062979.4661741-701-207714936881184/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:40 compute-0 sudo[63517]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:40 compute-0 sudo[63670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qonsuvoxmehwihrrytsoynphtnxjdzkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062980.4832044-746-135296974693236/AnsiballZ_systemd.py'
Nov 25 09:29:40 compute-0 sudo[63670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:40 compute-0 python3.9[63672]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:29:40 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 09:29:40 compute-0 sshd[962]: Received SIGHUP; restarting.
Nov 25 09:29:40 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 09:29:40 compute-0 sshd[962]: Server listening on 0.0.0.0 port 22.
Nov 25 09:29:40 compute-0 sshd[962]: Server listening on :: port 22.
Nov 25 09:29:40 compute-0 sudo[63670]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:41 compute-0 sudo[63826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylobivtlodbgwjekevxedthsxbukrdyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062981.1754498-770-50586686025706/AnsiballZ_file.py'
Nov 25 09:29:41 compute-0 sudo[63826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:41 compute-0 python3.9[63828]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:41 compute-0 sudo[63826]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:41 compute-0 sudo[63978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnndtztjahopuhhkmqlhskgwpilpvxic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062981.7001739-794-98003973853255/AnsiballZ_stat.py'
Nov 25 09:29:41 compute-0 sudo[63978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:42 compute-0 python3.9[63980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:42 compute-0 sudo[63978]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:42 compute-0 sudo[64101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwgsifrtoglmaclgswcpyevtykyaiaxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062981.7001739-794-98003973853255/AnsiballZ_copy.py'
Nov 25 09:29:42 compute-0 sudo[64101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:42 compute-0 python3.9[64103]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062981.7001739-794-98003973853255/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:42 compute-0 sudo[64101]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:43 compute-0 sudo[64253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoqnjanbbktlnvcgdqjjxhjapqqgmgga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062982.877994-848-194690404696440/AnsiballZ_timezone.py'
Nov 25 09:29:43 compute-0 sudo[64253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:43 compute-0 python3.9[64255]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 09:29:43 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 09:29:43 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 09:29:43 compute-0 sudo[64253]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:43 compute-0 sudo[64409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reobthuxwlsafweomurnvvdhtrozhjtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062983.675787-875-93086767276830/AnsiballZ_file.py'
Nov 25 09:29:43 compute-0 sudo[64409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:44 compute-0 python3.9[64411]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:44 compute-0 sudo[64409]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:44 compute-0 sudo[64561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxgblkmzuxlkdjqidtlcyiffpwayhuuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062984.210714-899-55118461485826/AnsiballZ_stat.py'
Nov 25 09:29:44 compute-0 sudo[64561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:44 compute-0 python3.9[64563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:44 compute-0 sudo[64561]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:44 compute-0 sudo[64684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzdasrkcpeiaicskcilckifivijwnlbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062984.210714-899-55118461485826/AnsiballZ_copy.py'
Nov 25 09:29:44 compute-0 sudo[64684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:44 compute-0 python3.9[64686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062984.210714-899-55118461485826/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:44 compute-0 sudo[64684]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:45 compute-0 sudo[64836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaeyifnxbfwpsoibrssywmhfowxoqdtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062985.1073794-944-9906286701490/AnsiballZ_stat.py'
Nov 25 09:29:45 compute-0 sudo[64836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:45 compute-0 python3.9[64838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:45 compute-0 sudo[64836]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:45 compute-0 sudo[64959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmtssaajxmhqtukomgwoovtfvbawzdvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062985.1073794-944-9906286701490/AnsiballZ_copy.py'
Nov 25 09:29:45 compute-0 sudo[64959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:45 compute-0 python3.9[64961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062985.1073794-944-9906286701490/.source.yaml _original_basename=.u1893c2l follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:45 compute-0 sudo[64959]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:46 compute-0 sudo[65111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zticaebgmhqmqvarvmochzadcdrkiqyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062986.066625-989-30982347795928/AnsiballZ_stat.py'
Nov 25 09:29:46 compute-0 sudo[65111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:46 compute-0 python3.9[65113]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:46 compute-0 sudo[65111]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:46 compute-0 sudo[65234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vssmcfushufedfkkgyesfhdfikreaavz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062986.066625-989-30982347795928/AnsiballZ_copy.py'
Nov 25 09:29:46 compute-0 sudo[65234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:46 compute-0 python3.9[65236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062986.066625-989-30982347795928/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:46 compute-0 sudo[65234]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:47 compute-0 sudo[65386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aavpyborfilppswwsxcfgepbeexatirb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062986.9892385-1034-169420291757787/AnsiballZ_command.py'
Nov 25 09:29:47 compute-0 sudo[65386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:47 compute-0 python3.9[65388]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:29:47 compute-0 sudo[65386]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:47 compute-0 sudo[65539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khahqwpuwfjrliwjlpiomtnwfjemwzey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062987.4839153-1058-206738411193102/AnsiballZ_command.py'
Nov 25 09:29:47 compute-0 sudo[65539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:47 compute-0 python3.9[65541]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:29:47 compute-0 sudo[65539]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:48 compute-0 sudo[65692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxelzclfvseuutjeofakgjwvkamcyqyu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764062987.9768667-1082-35430028319142/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 09:29:48 compute-0 sudo[65692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:48 compute-0 python3[65694]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 09:29:48 compute-0 sudo[65692]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:48 compute-0 sudo[65844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eewkpqjqbbpesmwhpqlxdycqrcrfhxuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062988.5981455-1106-104964174583519/AnsiballZ_stat.py'
Nov 25 09:29:48 compute-0 sudo[65844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:48 compute-0 python3.9[65846]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:48 compute-0 sudo[65844]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:49 compute-0 sudo[65967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewwqajgsqbmneikgnrqcxdexfjkjmgvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062988.5981455-1106-104964174583519/AnsiballZ_copy.py'
Nov 25 09:29:49 compute-0 sudo[65967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:49 compute-0 python3.9[65969]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062988.5981455-1106-104964174583519/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:49 compute-0 sudo[65967]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:49 compute-0 sudo[66119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-didhedsurqlpxlgigiaoykzemnxyyedc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062989.5294583-1151-246259990814505/AnsiballZ_stat.py'
Nov 25 09:29:49 compute-0 sudo[66119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:49 compute-0 python3.9[66121]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:49 compute-0 sudo[66119]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:50 compute-0 sudo[66242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdrrqmyypwfbwmqoqskkpuikkldgriso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062989.5294583-1151-246259990814505/AnsiballZ_copy.py'
Nov 25 09:29:50 compute-0 sudo[66242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:50 compute-0 python3.9[66244]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062989.5294583-1151-246259990814505/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:50 compute-0 sudo[66242]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:50 compute-0 sudo[66394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dczwwpchztujmdrlvcenipxdctftougt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062990.464882-1196-184685370178106/AnsiballZ_stat.py'
Nov 25 09:29:50 compute-0 sudo[66394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:50 compute-0 python3.9[66396]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:50 compute-0 sudo[66394]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:51 compute-0 sudo[66517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soaavnlfwlirozyaunebamazpzbozgjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062990.464882-1196-184685370178106/AnsiballZ_copy.py'
Nov 25 09:29:51 compute-0 sudo[66517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:51 compute-0 python3.9[66519]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062990.464882-1196-184685370178106/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:51 compute-0 sudo[66517]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:51 compute-0 sudo[66669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyboyqxdngacllsqrwtfyjdnhnoakyao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062991.4012725-1241-60031904434341/AnsiballZ_stat.py'
Nov 25 09:29:51 compute-0 sudo[66669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:51 compute-0 python3.9[66671]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:51 compute-0 sudo[66669]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:52 compute-0 sudo[66792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utbybcfvsjnivrlxzifocdrbahlnmzpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062991.4012725-1241-60031904434341/AnsiballZ_copy.py'
Nov 25 09:29:52 compute-0 sudo[66792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:52 compute-0 python3.9[66794]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062991.4012725-1241-60031904434341/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:52 compute-0 sudo[66792]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:52 compute-0 sudo[66944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoltugajhuntroadqqargmujpfxjbxwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062992.3386989-1286-83212896959674/AnsiballZ_stat.py'
Nov 25 09:29:52 compute-0 sudo[66944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:52 compute-0 python3.9[66946]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:29:52 compute-0 sudo[66944]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:52 compute-0 sudo[67067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvvijhyplejvrxfsgzrkmfhshebmhbjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062992.3386989-1286-83212896959674/AnsiballZ_copy.py'
Nov 25 09:29:52 compute-0 sudo[67067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:53 compute-0 python3.9[67069]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062992.3386989-1286-83212896959674/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:53 compute-0 sudo[67067]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:53 compute-0 sudo[67219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trkdydcqslgoguremgvfgwjbwscaatwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062993.3680835-1331-120321022394386/AnsiballZ_file.py'
Nov 25 09:29:53 compute-0 sudo[67219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:53 compute-0 python3.9[67221]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:53 compute-0 sudo[67219]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:54 compute-0 sudo[67371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uryuzehrrbsqpknjfwqzyjhvwktsukzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062993.9086163-1355-38738819245108/AnsiballZ_command.py'
Nov 25 09:29:54 compute-0 sudo[67371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:54 compute-0 python3.9[67373]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:29:54 compute-0 sudo[67371]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:54 compute-0 sudo[67530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnvnfjfjwvxuxwdjmmosrmbyedkhdgqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062994.4693491-1379-235963180534180/AnsiballZ_blockinfile.py'
Nov 25 09:29:54 compute-0 sudo[67530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:54 compute-0 python3.9[67532]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:55 compute-0 sudo[67530]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:55 compute-0 sudo[67683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knkxsgcldozmjxhjyhdvsvrntfjmjfff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062995.2512996-1406-113963830368442/AnsiballZ_file.py'
Nov 25 09:29:55 compute-0 sudo[67683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:55 compute-0 python3.9[67685]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:55 compute-0 sudo[67683]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:55 compute-0 sudo[67835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkzezkzhsxtoektnmieupapyuiscazsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062995.7207575-1406-125443178133654/AnsiballZ_file.py'
Nov 25 09:29:55 compute-0 sudo[67835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:56 compute-0 python3.9[67837]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:29:56 compute-0 sudo[67835]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:56 compute-0 sudo[67987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxnjqvxbcelknzdbfsksluyamprytrnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062996.3351393-1451-260841439005996/AnsiballZ_mount.py'
Nov 25 09:29:56 compute-0 sudo[67987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:56 compute-0 python3.9[67989]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 09:29:56 compute-0 sudo[67987]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:57 compute-0 sudo[68140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upadyuydzzpmuwlrlalkucufsrstkkxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764062996.9746337-1451-139443312510058/AnsiballZ_mount.py'
Nov 25 09:29:57 compute-0 sudo[68140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:29:57 compute-0 python3.9[68142]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 09:29:57 compute-0 sudo[68140]: pam_unix(sudo:session): session closed for user root
Nov 25 09:29:57 compute-0 sshd-session[58941]: Connection closed by 192.168.122.30 port 58376
Nov 25 09:29:57 compute-0 sshd-session[58938]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:29:57 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 09:29:57 compute-0 systemd[1]: session-14.scope: Consumed 23.521s CPU time.
Nov 25 09:29:57 compute-0 systemd-logind[744]: Session 14 logged out. Waiting for processes to exit.
Nov 25 09:29:57 compute-0 systemd-logind[744]: Removed session 14.
Nov 25 09:30:03 compute-0 sshd-session[68168]: Accepted publickey for zuul from 192.168.122.30 port 56008 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:30:03 compute-0 systemd-logind[744]: New session 15 of user zuul.
Nov 25 09:30:03 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 25 09:30:03 compute-0 sshd-session[68168]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:03 compute-0 sudo[68321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttwvtziqbfiguxvevmcufsiziqzzkqoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063003.5677686-18-91273851966970/AnsiballZ_tempfile.py'
Nov 25 09:30:03 compute-0 sudo[68321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:04 compute-0 python3.9[68323]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 09:30:04 compute-0 sudo[68321]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:04 compute-0 sudo[68473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkeiyufajmqogmgnhrpuewvpfugamagy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063004.2603104-54-181897503949615/AnsiballZ_stat.py'
Nov 25 09:30:04 compute-0 sudo[68473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:04 compute-0 python3.9[68475]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:30:04 compute-0 sudo[68473]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:05 compute-0 sudo[68625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aojcjbxuvwurhhrtxomhajsehrqxrhsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063004.9277055-84-168508814703311/AnsiballZ_setup.py'
Nov 25 09:30:05 compute-0 sudo[68625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:05 compute-0 python3.9[68627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:30:05 compute-0 sudo[68625]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:06 compute-0 sudo[68777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daubdqfunxnmuvgrstzgdedvpywrapvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063005.8328469-109-220804782197290/AnsiballZ_blockinfile.py'
Nov 25 09:30:06 compute-0 sudo[68777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:06 compute-0 python3.9[68779]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBYH+LEkGk38QCoX+uCPb3zHk7+XCeEWV22HpalqUrYF70U5Myra5/E2/v2kioqGNh5TR9q+A7kNO0JU78Ai+6UBv5aJlbEptu33E5t38qiAv3rpyypYwQ8PdWBl7OCeDcqz0EyYAZEw7rLbCWimqRhYsSXuUND+rRboiuI8DEX229oAgnRmIjyPJTTdKGiM3FTdl9YiSbYNyBykzJ8AugCfme4+hmds+8LJloh2aJjRJCs3/GvxdaGJcjBWAqN3Aurg+gPekKe4fwmOir2+KpqBDQE9YMfiBvraaCMGrDXkAjPdsycsvGMsWckhOgEW5qpTIt+ca5kcrK43ChAH5R/PpHlHnEYqw2o26BLmqIejfmXKRSxmH/Fq9Ldj3DMLJr4NTFBfJAl8wqsUKs6/0jngwOCYz6NLs7GgGZLMYv6wbRVgUpCc4ikQ8f1EDmXTdtqxef+QdmLTgWY1qCqe5lL8BcDDCjOTLJ6bbLUAdubY1z4vb6SFVcamH4SkSCFxs=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHCQQOw3EbtZ2XAFA2gGrEnb7MaEAFwIJjyskket7pD
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFP8ctNKDLqIcODtgMol02WD/NgFM5ja/WeN20e07JH/Mz/Ge/v2/ybsY8LOtiyzixlX47XT8hWBR4IBwS2uvfM=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/QqShzRf5Fxs30q3tSf7IhrByfRVQwrs4CVW/gcd2Sdcp7tmVXVNFpJc8XlgTmWxcSLbFtAv0HgJOJ3p6/+g394nChAIaM55uhK/RLFqBZ/byiFqEjvN2LkEWuUVdvbZM808GhONJnWQtg70nn99jeLP34zkSD7gsU7cykxF7K7VyeBfeSiuOcyTjXvVfXr9TZxCZMrsb4eWFZAZ4QERXITlLcZthwc0kd17QWJWLo8Ssv4Qu0DtCHtqHO07s7Nz/CpSs0TX5jVM+C+2rAMn+aAZ4J25X8di4ABF5tO27d+ePazRlU5PWjb8n6kdy1B/cjHgvajXOoUPb5RjyVx2IgULBXaWsIRO23wp8YqiE1OdTly2+Nr5KiTPvR5yqq9C6aBNzS7YyUQc6Rf2RBAaLQbA36NJLGvPUWC7iYVtWdGoTfcTmzqkD2s3hzZl+zU2xNS0IpwByJsOJVIijtGFh1Y45uujq0WUJNPf1ayrY2Z/TV+iO/1iah3JArjyNiq8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPD1sScOy6Aiq5PZkl3KepHqJnvlMIZW4R0DzMl4b3w
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO/iVb5vehoW1eqrk4jdR3j25kacpoWkaPIq4PHAndTN4lXAEwSRab7iUqXkAAaYvUnrCJ86WUoAYGkII0QB5wA=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSE1VMIuB9MiQ17/QHDRAbfwrBNbTb+wZH1rCqeQvAxcHqZYp6TugJnyWX+nah5oDk8vz2PCIUW2lm/tVgP4Y2JHeaN2uMNgVnz1WtD6lCQORMYi1R+KpBgiAQoZAjAyC5Ugx5LWbDvrwtpt0zi2DEgCr2Zao5DG5UAaIcs7/Rj2LRx3hgA4jJ9xJKHVi5bUZfjIlWxLzVXVYT+dvUNrZoiVMBcaUMZRpU4tJ/76mE2jbqsfHEPFwHZ6ljoIegFbzNYoKYMCPK+DeOs/73xD4r/nzeQOK3IQzMOEEVaUYvceA+EPX4M+MrKfkNrJwf35qTOFJpb368gJsebA9uXjzPfzX/uh1atxLv5SihEzC5fHdiZ3BZ3wLEy0C7lvXyRBZdQx+anEYQnDepM/ThOT4YR2BNSCdRS2OpzeSJDS+o5CS++zCqWM4yI3lufZm8O8JqPEblV518196TSyMlAOzPbjEjrUaYGdljY5S2OzKA4PBJW4hW4RyBtjcZWJBpNlM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBoG9NSSqw98oHfgpW8u+wJYHDhMiOjIhpCElLIROYdO
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFL1noqwoCl3YzxWiRl0GcsDxYERT1o8e2TvLqUkxWuv8xj0oHuq7+GhcKu7HpiCls71ko7MDcOX4zteG544k4=
                                             create=True mode=0644 path=/tmp/ansible._5vwixmq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:06 compute-0 sudo[68777]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:06 compute-0 sudo[68929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuwxrlqgwvsuplkvtlgeengsqsihqmjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063006.4524634-133-223861585518562/AnsiballZ_command.py'
Nov 25 09:30:06 compute-0 sudo[68929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:06 compute-0 python3.9[68931]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._5vwixmq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:06 compute-0 sudo[68929]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:07 compute-0 sudo[69083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxvzvlcelxryupelgxqzruuntljgozyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063007.0404792-157-197353813170447/AnsiballZ_file.py'
Nov 25 09:30:07 compute-0 sudo[69083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:07 compute-0 python3.9[69085]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible._5vwixmq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:07 compute-0 sudo[69083]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:07 compute-0 sshd-session[68171]: Connection closed by 192.168.122.30 port 56008
Nov 25 09:30:07 compute-0 sshd-session[68168]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:30:07 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 09:30:07 compute-0 systemd[1]: session-15.scope: Consumed 2.256s CPU time.
Nov 25 09:30:07 compute-0 systemd-logind[744]: Session 15 logged out. Waiting for processes to exit.
Nov 25 09:30:07 compute-0 systemd-logind[744]: Removed session 15.
Nov 25 09:30:13 compute-0 sshd-session[69110]: Accepted publickey for zuul from 192.168.122.30 port 36880 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:30:13 compute-0 systemd-logind[744]: New session 16 of user zuul.
Nov 25 09:30:13 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 25 09:30:13 compute-0 sshd-session[69110]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:13 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 09:30:14 compute-0 python3.9[69265]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:30:14 compute-0 sudo[69419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azgstrjywmofvpzksbrevazrgnvetieh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063014.4253893-56-276683964934940/AnsiballZ_systemd.py'
Nov 25 09:30:14 compute-0 sudo[69419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:15 compute-0 python3.9[69421]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 09:30:15 compute-0 sudo[69419]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:15 compute-0 sudo[69573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwkidpenvvaqdhwofxpjvjsxcduuktwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063015.2473714-80-255064895473761/AnsiballZ_systemd.py'
Nov 25 09:30:15 compute-0 sudo[69573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:15 compute-0 python3.9[69575]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:30:15 compute-0 sudo[69573]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:16 compute-0 sudo[69726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnficiebplcssxdipusorrhioddqxhye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063015.9244087-107-179898747975268/AnsiballZ_command.py'
Nov 25 09:30:16 compute-0 sudo[69726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:16 compute-0 python3.9[69728]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:16 compute-0 sudo[69726]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:16 compute-0 sudo[69879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgtjnlaibrothdgqrenkpafnpbqogbam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063016.5455527-131-94634834285963/AnsiballZ_stat.py'
Nov 25 09:30:16 compute-0 sudo[69879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:16 compute-0 python3.9[69881]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:30:17 compute-0 sudo[69879]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:17 compute-0 sudo[70033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gckghsyjvmsdbwvpcniqiquabkyigfqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063017.1420736-155-30053082820892/AnsiballZ_command.py'
Nov 25 09:30:17 compute-0 sudo[70033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:17 compute-0 python3.9[70035]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:17 compute-0 sudo[70033]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:17 compute-0 sudo[70188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpsymbgsdypisicjxlsfcbozjiqarqvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063017.626724-179-267174519544767/AnsiballZ_file.py'
Nov 25 09:30:17 compute-0 sudo[70188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:18 compute-0 python3.9[70190]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:18 compute-0 sudo[70188]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:18 compute-0 sshd-session[69113]: Connection closed by 192.168.122.30 port 36880
Nov 25 09:30:18 compute-0 sshd-session[69110]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:30:18 compute-0 systemd-logind[744]: Session 16 logged out. Waiting for processes to exit.
Nov 25 09:30:18 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 09:30:18 compute-0 systemd[1]: session-16.scope: Consumed 2.977s CPU time.
Nov 25 09:30:18 compute-0 systemd-logind[744]: Removed session 16.
Nov 25 09:30:23 compute-0 sshd-session[70215]: Accepted publickey for zuul from 192.168.122.30 port 46256 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:30:23 compute-0 systemd-logind[744]: New session 17 of user zuul.
Nov 25 09:30:23 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 25 09:30:23 compute-0 sshd-session[70215]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:24 compute-0 python3.9[70368]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:30:24 compute-0 sudo[70522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hevupepjwnobjwawfrobarvdjymhmsbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063024.591388-62-139019436154330/AnsiballZ_setup.py'
Nov 25 09:30:24 compute-0 sudo[70522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 python3.9[70524]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:30:25 compute-0 sudo[70522]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:25 compute-0 sudo[70606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrsxhhifevzpzrcdmqqyumfwuvphtuuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063024.591388-62-139019436154330/AnsiballZ_dnf.py'
Nov 25 09:30:25 compute-0 sudo[70606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:25 compute-0 python3.9[70608]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 09:30:26 compute-0 sudo[70606]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:27 compute-0 python3.9[70759]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
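Note: needs-restarting -r comes from yum-utils, installed by the preceding dnf task; it exits 0 when no reboot is required and 1 when updated core packages (e.g. kernel or glibc) warrant one, so the play can branch on the return code instead of parsing output:

    # Exit status 1 from `needs-restarting -r` means a reboot is advised
    if ! needs-restarting -r; then
        echo "reboot required"
    fi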
Nov 25 09:30:28 compute-0 python3.9[70910]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:30:28 compute-0 python3.9[71060]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:30:29 compute-0 python3.9[71210]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:30:29 compute-0 sshd-session[70218]: Connection closed by 192.168.122.30 port 46256
Nov 25 09:30:29 compute-0 sshd-session[70215]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:30:29 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 09:30:29 compute-0 systemd[1]: session-17.scope: Consumed 4.035s CPU time.
Nov 25 09:30:29 compute-0 systemd-logind[744]: Session 17 logged out. Waiting for processes to exit.
Nov 25 09:30:29 compute-0 systemd-logind[744]: Removed session 17.
Nov 25 09:30:36 compute-0 sshd-session[71235]: Accepted publickey for zuul from 192.168.26.191 port 39028 ssh2: RSA SHA256:s7IOmVGBFERPpXYPL/Wxp3ltfNRkS78sM3fXgIDzVB4
Nov 25 09:30:36 compute-0 systemd-logind[744]: New session 18 of user zuul.
Nov 25 09:30:36 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 25 09:30:36 compute-0 sshd-session[71235]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:30:36 compute-0 sudo[71311]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufocozxvcvgruyubropdbternfnvsyds ; /usr/bin/python3'
Nov 25 09:30:36 compute-0 sudo[71311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:36 compute-0 useradd[71315]: new group: name=ceph-admin, GID=42478
Nov 25 09:30:36 compute-0 useradd[71315]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 25 09:30:36 compute-0 sudo[71311]: pam_unix(sudo:session): session closed for user root
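Note: a dedicated ceph-admin account (uid 42477, gid 42478) is created ahead of any Ceph packages; cephadm is later pointed at it via --ssh-user ceph-admin in the bootstrap command further down. The Ansible user task reduces to roughly:

    # Equivalent of the user-creation task; uid/gid values taken from the log
    groupadd -g 42478 ceph-admin
    useradd -u 42477 -g 42478 -m -s /bin/bash ceph-admin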
Nov 25 09:30:36 compute-0 sudo[71397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sruxepjxbgcfosyemposryzkfcvuxpry ; /usr/bin/python3'
Nov 25 09:30:36 compute-0 sudo[71397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:36 compute-0 sudo[71397]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:37 compute-0 sudo[71470]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqxahmomoxvovuqgfmvpmyfyrpwojtfj ; /usr/bin/python3'
Nov 25 09:30:37 compute-0 sudo[71470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:37 compute-0 sudo[71470]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:37 compute-0 sudo[71520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmstqwhxdxjwobppfxvecvxmtgknvijr ; /usr/bin/python3'
Nov 25 09:30:37 compute-0 sudo[71520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:37 compute-0 sudo[71520]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:37 compute-0 sudo[71546]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcmijzxvxtsfoxuhgiqluxoxarizsoyw ; /usr/bin/python3'
Nov 25 09:30:37 compute-0 sudo[71546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:37 compute-0 sudo[71546]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:37 compute-0 sudo[71572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umbobchpcyhnphtxxcdtwdarjmrxzebh ; /usr/bin/python3'
Nov 25 09:30:37 compute-0 sudo[71572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:37 compute-0 sudo[71572]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:38 compute-0 sudo[71598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsqexkrjoezhengsnlpvzycrtzyrqtj ; /usr/bin/python3'
Nov 25 09:30:38 compute-0 sudo[71598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:38 compute-0 sudo[71598]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:38 compute-0 sudo[71676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smkoujalnrvbxfwxjmgxwlompfgwfhkc ; /usr/bin/python3'
Nov 25 09:30:38 compute-0 sudo[71676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:38 compute-0 sudo[71676]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:38 compute-0 sudo[71749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wniqozlwcrcmerlyzfjlyuncakamffnf ; /usr/bin/python3'
Nov 25 09:30:38 compute-0 sudo[71749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:38 compute-0 sudo[71749]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:39 compute-0 sudo[71851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdhavcvolkigrfnypkxbjwrmtnrigwgt ; /usr/bin/python3'
Nov 25 09:30:39 compute-0 sudo[71851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:39 compute-0 sudo[71851]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:39 compute-0 sudo[71924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjjurrltaezjldoivzrwdgxnajfilfwr ; /usr/bin/python3'
Nov 25 09:30:39 compute-0 sudo[71924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:39 compute-0 sudo[71924]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:40 compute-0 sudo[71974]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orpzpuyahjpncnexoepukcccwtgrovfc ; /usr/bin/python3'
Nov 25 09:30:40 compute-0 sudo[71974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:40 compute-0 python3[71976]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:30:40 compute-0 sudo[71974]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:41 compute-0 sudo[72065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awhwdhwdzzlacneycjhakamktehkzdxy ; /usr/bin/python3'
Nov 25 09:30:41 compute-0 sudo[72065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:41 compute-0 python3[72067]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 09:30:42 compute-0 sudo[72065]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:42 compute-0 sudo[72092]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjgtyzozdsnsrpztexmrrwoyvqvhktqh ; /usr/bin/python3'
Nov 25 09:30:42 compute-0 sudo[72092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:42 compute-0 python3[72094]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 09:30:42 compute-0 sudo[72092]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:42 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:30:42 compute-0 sudo[72119]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvnyzkzgunjngruwovjfstczbwmibgwo ; /usr/bin/python3'
Nov 25 09:30:42 compute-0 sudo[72119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:43 compute-0 python3[72121]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:43 compute-0 kernel: loop: module loaded
Nov 25 09:30:43 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 25 09:30:43 compute-0 sudo[72119]: pam_unix(sudo:session): session closed for user root
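Note: the dd invocation creates a sparse 20 GiB backing file without writing a single byte of data: count=0 writes nothing and seek=20G merely extends the file size. The kernel line confirms the arithmetic: loop3 reports 41943040 512-byte sectors, i.e. 41943040 * 512 = 21474836480 bytes = 20 GiB. An equivalent, more idiomatic sequence:

    # truncate produces the same sparse 20 GiB file as dd count=0 seek=20G
    truncate -s 20G /var/lib/ceph-osd-0.img
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk /dev/loop3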
Nov 25 09:30:43 compute-0 sudo[72154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytxcptuosvqheyfbngcrmxpovlkedqig ; /usr/bin/python3'
Nov 25 09:30:43 compute-0 sudo[72154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:43 compute-0 python3[72156]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:43 compute-0 lvm[72159]: PV /dev/loop3 not used.
Nov 25 09:30:43 compute-0 lvm[72168]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:30:43 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 25 09:30:43 compute-0 sudo[72154]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:43 compute-0 lvm[72170]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 25 09:30:43 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
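Note: the loop device is then carved into a single full-size logical volume for the OSD. The lvm/systemd chatter after pvcreate is event-based autoactivation: as soon as udev sees the volume group's only PV appear, systemd runs vgchange -aay for ceph_vg0 and activates its one LV. The three commands from the task:

    pvcreate /dev/loop3                          # label the loop device as an LVM PV
    vgcreate ceph_vg0 /dev/loop3                 # single-PV volume group
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # one LV consuming every free extent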
Nov 25 09:30:43 compute-0 sudo[72246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcfztgyxxjdnrqxdbbmsdkziuvzxlteb ; /usr/bin/python3'
Nov 25 09:30:43 compute-0 sudo[72246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:43 compute-0 python3[72248]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:30:43 compute-0 sudo[72246]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:43 compute-0 sudo[72319]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgulbcvnjslcwiyhabqsqaltjdmpfupx ; /usr/bin/python3'
Nov 25 09:30:43 compute-0 sudo[72319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:44 compute-0 python3[72321]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063043.6179578-37200-13068069237905/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:44 compute-0 sudo[72319]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:44 compute-0 sudo[72369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdyvkqykabzlxpanlgntesdvllstuhsi ; /usr/bin/python3'
Nov 25 09:30:44 compute-0 sudo[72369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:44 compute-0 python3[72371]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:30:44 compute-0 systemd[1]: Reloading.
Nov 25 09:30:44 compute-0 systemd-sysv-generator[72398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:30:44 compute-0 systemd-rc-local-generator[72395]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:30:44 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 25 09:30:44 compute-0 bash[72412]: /dev/loop3: [64513]:4327758 (/var/lib/ceph-osd-0.img)
Nov 25 09:30:44 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 25 09:30:44 compute-0 lvm[72413]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:30:44 compute-0 lvm[72413]: VG ceph_vg0 finished
Nov 25 09:30:44 compute-0 sudo[72369]: pam_unix(sudo:session): session closed for user root
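Note: loop mappings do not survive a reboot, so a oneshot unit is installed and enabled to re-attach the backing file at boot, before LVM autoactivation looks for the PV; the bash[72412] line is losetup's status output confirming /dev/loop3 is already bound to /var/lib/ceph-osd-0.img. The unit body itself is not logged; a plausible sketch of what the ceph-osd-losetup.service.j2 template renders to, where only the unit description, device and image path come from the log and the rest is an assumption:

    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    RemainAfterExit=true
    # Re-attach the backing file unless the loop device is already set up
    ExecStart=/bin/bash -c '/usr/sbin/losetup /dev/loop3 || /usr/sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target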
Nov 25 09:30:46 compute-0 python3[72438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:30:48 compute-0 sudo[72529]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsrnqpysgikajazytsaxhjkaffylqjyw ; /usr/bin/python3'
Nov 25 09:30:48 compute-0 sudo[72529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:48 compute-0 python3[72531]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 09:30:50 compute-0 sudo[72529]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:50 compute-0 sudo[72587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqzymzuwnmkqsnavbzosjjbfhxnrwhdk ; /usr/bin/python3'
Nov 25 09:30:50 compute-0 sudo[72587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:50 compute-0 python3[72589]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 09:30:52 compute-0 groupadd[72599]: group added to /etc/group: name=cephadm, GID=992
Nov 25 09:30:52 compute-0 groupadd[72599]: group added to /etc/gshadow: name=cephadm
Nov 25 09:30:52 compute-0 groupadd[72599]: new group: name=cephadm, GID=992
Nov 25 09:30:52 compute-0 useradd[72606]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 25 09:30:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:30:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:30:52 compute-0 sudo[72587]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:30:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:30:52 compute-0 systemd[1]: run-ra7b3bd3e83304a09b59b13c55cc1caf4.service: Deactivated successfully.
Nov 25 09:30:52 compute-0 sudo[72702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdctazugpnefiifffauhrgbdwtjwbsvj ; /usr/bin/python3'
Nov 25 09:30:52 compute-0 sudo[72702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:52 compute-0 python3[72704]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 09:30:52 compute-0 sudo[72702]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:53 compute-0 sudo[72730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdmznveodnhbfuchdgotaamtgkakjkzh ; /usr/bin/python3'
Nov 25 09:30:53 compute-0 sudo[72730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:53 compute-0 python3[72732]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:30:53 compute-0 sudo[72730]: pam_unix(sudo:session): session closed for user root
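Note: before bootstrapping, the play inventories any daemons cephadm already manages on this host; on a fresh node the command prints an empty JSON list, which is what keeps the role idempotent across reruns. The overlay.mount lines around it are podman briefly mounting container storage while cephadm inspects it:

    # List cephadm-managed daemons on this host (JSON; [] before bootstrap)
    /usr/sbin/cephadm ls --no-detail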
Nov 25 09:30:53 compute-0 sudo[72787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miarzaeowznozbbztyiaprjorqptpuot ; /usr/bin/python3'
Nov 25 09:30:53 compute-0 sudo[72787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:53 compute-0 python3[72789]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:53 compute-0 sudo[72787]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:54 compute-0 sudo[72813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvqhgblrgeccuayortbsgjjdzfsbcbnj ; /usr/bin/python3'
Nov 25 09:30:54 compute-0 sudo[72813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:54 compute-0 python3[72815]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:54 compute-0 sudo[72813]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:30:54 compute-0 sudo[72891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxnrlosnzdxqygwzdrydfgfizadvtinb ; /usr/bin/python3'
Nov 25 09:30:54 compute-0 sudo[72891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:54 compute-0 python3[72893]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:30:54 compute-0 sudo[72891]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:54 compute-0 sudo[72964]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwdamhmjsdfkqosfxqyohldzvfahvsgm ; /usr/bin/python3'
Nov 25 09:30:54 compute-0 sudo[72964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:54 compute-0 python3[72966]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063054.421626-37392-72901294550191/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:54 compute-0 sudo[72964]: pam_unix(sudo:session): session closed for user root
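Note: ceph_spec.yaml is the service specification later fed to the orchestrator (ceph orch apply); its contents are not logged. Purely as an illustration of the format, a minimal single-host spec for this topology might look like the following, where everything except the hostname and the LV created above is an assumption:

    service_type: host
    hostname: compute-0
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg0/ceph_lv0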
Nov 25 09:30:55 compute-0 sudo[73066]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wegfotunfusgaeymrdjfbaxfcjnosycp ; /usr/bin/python3'
Nov 25 09:30:55 compute-0 sudo[73066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:55 compute-0 python3[73068]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:30:55 compute-0 sudo[73066]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:55 compute-0 sudo[73139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmioyduqdgjptpqrrgsicscvabuugixk ; /usr/bin/python3'
Nov 25 09:30:55 compute-0 sudo[73139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:55 compute-0 python3[73141]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063055.2032995-37410-142758048063751/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:30:55 compute-0 sudo[73139]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:55 compute-0 sudo[73189]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzmuwsdyjtqzmecavakoknbwwsppdvwz ; /usr/bin/python3'
Nov 25 09:30:55 compute-0 sudo[73189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:55 compute-0 python3[73191]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 09:30:55 compute-0 sudo[73189]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:56 compute-0 sudo[73217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zniwbjcjrvbssfucrjusxkvnsqrcsyev ; /usr/bin/python3'
Nov 25 09:30:56 compute-0 sudo[73217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:56 compute-0 python3[73219]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 09:30:56 compute-0 sudo[73217]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:56 compute-0 sudo[73245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbeeersuayjljpeujypibndqtsrhddxe ; /usr/bin/python3'
Nov 25 09:30:56 compute-0 sudo[73245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:56 compute-0 python3[73247]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 09:30:56 compute-0 sudo[73245]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:56 compute-0 sudo[73273]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acaonvoyrxwlfmswkbatlatytyuogxrp ; /usr/bin/python3'
Nov 25 09:30:56 compute-0 sudo[73273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:30:56 compute-0 python3[73275]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
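Note: cephadm bootstrap stands up the initial mon (at --mon-ip 192.168.122.100) and mgr on this host, reusing the pre-generated ceph-admin SSH key pair, a fixed --fsid, and the options assimilated from assimilate_ceph.conf, and writes the admin keyring and ceph.conf to /etc/ceph. The short SSH session 19 that follows is bootstrap verifying it can log in as ceph-admin and run passwordless sudo (the lone /bin/echo). Once it completes, the cluster can be checked from the keyring it wrote, e.g.:

    # Post-bootstrap sanity checks (sketch)
    cephadm shell -- ceph -s
    cephadm shell -- ceph orch host ls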
Nov 25 09:30:56 compute-0 sshd-session[73279]: Accepted publickey for ceph-admin from 192.168.122.100 port 45752 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:30:56 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 09:30:56 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 09:30:56 compute-0 systemd-logind[744]: New session 19 of user ceph-admin.
Nov 25 09:30:56 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 09:30:56 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 25 09:30:56 compute-0 systemd[73283]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:30:56 compute-0 systemd[73283]: Queued start job for default target Main User Target.
Nov 25 09:30:56 compute-0 systemd[73283]: Created slice User Application Slice.
Nov 25 09:30:56 compute-0 systemd[73283]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 09:30:56 compute-0 systemd[73283]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 09:30:56 compute-0 systemd[73283]: Reached target Paths.
Nov 25 09:30:56 compute-0 systemd[73283]: Reached target Timers.
Nov 25 09:30:56 compute-0 systemd[73283]: Starting D-Bus User Message Bus Socket...
Nov 25 09:30:56 compute-0 systemd[73283]: Starting Create User's Volatile Files and Directories...
Nov 25 09:30:56 compute-0 systemd[73283]: Listening on D-Bus User Message Bus Socket.
Nov 25 09:30:56 compute-0 systemd[73283]: Reached target Sockets.
Nov 25 09:30:56 compute-0 systemd[73283]: Finished Create User's Volatile Files and Directories.
Nov 25 09:30:56 compute-0 systemd[73283]: Reached target Basic System.
Nov 25 09:30:56 compute-0 systemd[73283]: Reached target Main User Target.
Nov 25 09:30:56 compute-0 systemd[73283]: Startup finished in 92ms.
Nov 25 09:30:56 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 25 09:30:56 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Nov 25 09:30:57 compute-0 sshd-session[73279]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:30:57 compute-0 sudo[73299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 25 09:30:57 compute-0 sudo[73299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:30:57 compute-0 sudo[73299]: pam_unix(sudo:session): session closed for user root
Nov 25 09:30:57 compute-0 sshd-session[73298]: Received disconnect from 192.168.122.100 port 45752:11: disconnected by user
Nov 25 09:30:57 compute-0 sshd-session[73298]: Disconnected from user ceph-admin 192.168.122.100 port 45752
Nov 25 09:30:57 compute-0 sshd-session[73279]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:30:57 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 25 09:30:57 compute-0 systemd-logind[744]: Session 19 logged out. Waiting for processes to exit.
Nov 25 09:30:57 compute-0 systemd-logind[744]: Removed session 19.
Nov 25 09:30:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:30:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:30:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1544936988-merged.mount: Deactivated successfully.
Nov 25 09:30:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1544936988-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 09:31:07 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 25 09:31:07 compute-0 systemd[73283]: Activating special unit Exit the Session...
Nov 25 09:31:07 compute-0 systemd[73283]: Stopped target Main User Target.
Nov 25 09:31:07 compute-0 systemd[73283]: Stopped target Basic System.
Nov 25 09:31:07 compute-0 systemd[73283]: Stopped target Paths.
Nov 25 09:31:07 compute-0 systemd[73283]: Stopped target Sockets.
Nov 25 09:31:07 compute-0 systemd[73283]: Stopped target Timers.
Nov 25 09:31:07 compute-0 systemd[73283]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 25 09:31:07 compute-0 systemd[73283]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 09:31:07 compute-0 systemd[73283]: Closed D-Bus User Message Bus Socket.
Nov 25 09:31:07 compute-0 systemd[73283]: Stopped Create User's Volatile Files and Directories.
Nov 25 09:31:07 compute-0 systemd[73283]: Removed slice User Application Slice.
Nov 25 09:31:07 compute-0 systemd[73283]: Reached target Shutdown.
Nov 25 09:31:07 compute-0 systemd[73283]: Finished Exit the Session.
Nov 25 09:31:07 compute-0 systemd[73283]: Reached target Exit the Session.
Nov 25 09:31:07 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 25 09:31:07 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 25 09:31:07 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 25 09:31:07 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 25 09:31:07 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 25 09:31:07 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 25 09:31:07 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 25 09:31:13 compute-0 podman[73373]: 2025-11-25 09:31:13.924772178 +0000 UTC m=+16.676614428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:31:13 compute-0 podman[73423]: 2025-11-25 09:31:13.967930665 +0000 UTC m=+0.026074293 container create 6621882a16c0a16d183dcedb2be22f27d1e49aea67a0d65ba15718874138c3eb (image=quay.io/ceph/ceph:v19, name=cranky_lamarr, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2428809848-merged.mount: Deactivated successfully.
Nov 25 09:31:13 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 09:31:13 compute-0 systemd[1]: Started libpod-conmon-6621882a16c0a16d183dcedb2be22f27d1e49aea67a0d65ba15718874138c3eb.scope.
Nov 25 09:31:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:14 compute-0 podman[73423]: 2025-11-25 09:31:14.030729165 +0000 UTC m=+0.088872813 container init 6621882a16c0a16d183dcedb2be22f27d1e49aea67a0d65ba15718874138c3eb (image=quay.io/ceph/ceph:v19, name=cranky_lamarr, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:14 compute-0 podman[73423]: 2025-11-25 09:31:14.035944566 +0000 UTC m=+0.094088194 container start 6621882a16c0a16d183dcedb2be22f27d1e49aea67a0d65ba15718874138c3eb (image=quay.io/ceph/ceph:v19, name=cranky_lamarr, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:31:14 compute-0 podman[73423]: 2025-11-25 09:31:14.037143436 +0000 UTC m=+0.095287064 container attach 6621882a16c0a16d183dcedb2be22f27d1e49aea67a0d65ba15718874138c3eb (image=quay.io/ceph/ceph:v19, name=cranky_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:31:14 compute-0 podman[73423]: 2025-11-25 09:31:13.957262494 +0000 UTC m=+0.015406143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:14 compute-0 cranky_lamarr[73436]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Nov 25 09:31:14 compute-0 systemd[1]: libpod-6621882a16c0a16d183dcedb2be22f27d1e49aea67a0d65ba15718874138c3eb.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73423]: 2025-11-25 09:31:14.112734181 +0000 UTC m=+0.170877819 container died 6621882a16c0a16d183dcedb2be22f27d1e49aea67a0d65ba15718874138c3eb (image=quay.io/ceph/ceph:v19, name=cranky_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:14 compute-0 podman[73423]: 2025-11-25 09:31:14.130534466 +0000 UTC m=+0.188678094 container remove 6621882a16c0a16d183dcedb2be22f27d1e49aea67a0d65ba15718874138c3eb (image=quay.io/ceph/ceph:v19, name=cranky_lamarr, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:14 compute-0 systemd[1]: libpod-conmon-6621882a16c0a16d183dcedb2be22f27d1e49aea67a0d65ba15718874138c3eb.scope: Deactivated successfully.
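Note: each create/init/start/attach/died/remove burst is cephadm executing one short-lived command inside the ceph:v19 image and capturing its stdout through conmon; the container named cranky_lamarr exists only long enough to print the release banner seen above. The manual equivalent is roughly:

    # Throwaway container: print the Ceph version, auto-remove on exit
    podman run --rm quay.io/ceph/ceph:v19 ceph --version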
Nov 25 09:31:14 compute-0 podman[73450]: 2025-11-25 09:31:14.170100271 +0000 UTC m=+0.025846504 container create 127ef40c20b4d3a00925cb017533692d15e12b2fe4e16afb00d737235ae7fed1 (image=quay.io/ceph/ceph:v19, name=competent_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:31:14 compute-0 systemd[1]: Started libpod-conmon-127ef40c20b4d3a00925cb017533692d15e12b2fe4e16afb00d737235ae7fed1.scope.
Nov 25 09:31:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:14 compute-0 podman[73450]: 2025-11-25 09:31:14.212074555 +0000 UTC m=+0.067820789 container init 127ef40c20b4d3a00925cb017533692d15e12b2fe4e16afb00d737235ae7fed1 (image=quay.io/ceph/ceph:v19, name=competent_varahamihira, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:14 compute-0 podman[73450]: 2025-11-25 09:31:14.217447655 +0000 UTC m=+0.073193887 container start 127ef40c20b4d3a00925cb017533692d15e12b2fe4e16afb00d737235ae7fed1 (image=quay.io/ceph/ceph:v19, name=competent_varahamihira, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:14 compute-0 podman[73450]: 2025-11-25 09:31:14.218981786 +0000 UTC m=+0.074728019 container attach 127ef40c20b4d3a00925cb017533692d15e12b2fe4e16afb00d737235ae7fed1 (image=quay.io/ceph/ceph:v19, name=competent_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:14 compute-0 competent_varahamihira[73464]: 167 167
Nov 25 09:31:14 compute-0 podman[73450]: 2025-11-25 09:31:14.219868327 +0000 UTC m=+0.075614560 container died 127ef40c20b4d3a00925cb017533692d15e12b2fe4e16afb00d737235ae7fed1 (image=quay.io/ceph/ceph:v19, name=competent_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 25 09:31:14 compute-0 systemd[1]: libpod-127ef40c20b4d3a00925cb017533692d15e12b2fe4e16afb00d737235ae7fed1.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73450]: 2025-11-25 09:31:14.236682423 +0000 UTC m=+0.092428656 container remove 127ef40c20b4d3a00925cb017533692d15e12b2fe4e16afb00d737235ae7fed1 (image=quay.io/ceph/ceph:v19, name=competent_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:31:14 compute-0 podman[73450]: 2025-11-25 09:31:14.159673706 +0000 UTC m=+0.015419949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:14 compute-0 systemd[1]: libpod-conmon-127ef40c20b4d3a00925cb017533692d15e12b2fe4e16afb00d737235ae7fed1.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73479]: 2025-11-25 09:31:14.276911339 +0000 UTC m=+0.026253191 container create 8b49c73f1e1726d42b3f5c0eca83ee47ea819ae6c09073b241acdf5ff01f2faf (image=quay.io/ceph/ceph:v19, name=quirky_haibt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:31:14 compute-0 systemd[1]: Started libpod-conmon-8b49c73f1e1726d42b3f5c0eca83ee47ea819ae6c09073b241acdf5ff01f2faf.scope.
Nov 25 09:31:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:14 compute-0 podman[73479]: 2025-11-25 09:31:14.312634219 +0000 UTC m=+0.061976091 container init 8b49c73f1e1726d42b3f5c0eca83ee47ea819ae6c09073b241acdf5ff01f2faf (image=quay.io/ceph/ceph:v19, name=quirky_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:14 compute-0 podman[73479]: 2025-11-25 09:31:14.31636921 +0000 UTC m=+0.065711061 container start 8b49c73f1e1726d42b3f5c0eca83ee47ea819ae6c09073b241acdf5ff01f2faf (image=quay.io/ceph/ceph:v19, name=quirky_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:14 compute-0 podman[73479]: 2025-11-25 09:31:14.317492827 +0000 UTC m=+0.066834680 container attach 8b49c73f1e1726d42b3f5c0eca83ee47ea819ae6c09073b241acdf5ff01f2faf (image=quay.io/ceph/ceph:v19, name=quirky_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:31:14 compute-0 quirky_haibt[73492]: AQBidyVpen7DExAAFWqK79V9BcyClCnoByYoyA==
Nov 25 09:31:14 compute-0 systemd[1]: libpod-8b49c73f1e1726d42b3f5c0eca83ee47ea819ae6c09073b241acdf5ff01f2faf.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73479]: 2025-11-25 09:31:14.333718524 +0000 UTC m=+0.083060376 container died 8b49c73f1e1726d42b3f5c0eca83ee47ea819ae6c09073b241acdf5ff01f2faf (image=quay.io/ceph/ceph:v19, name=quirky_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 25 09:31:14 compute-0 podman[73479]: 2025-11-25 09:31:14.351709448 +0000 UTC m=+0.101051301 container remove 8b49c73f1e1726d42b3f5c0eca83ee47ea819ae6c09073b241acdf5ff01f2faf (image=quay.io/ceph/ceph:v19, name=quirky_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:31:14 compute-0 podman[73479]: 2025-11-25 09:31:14.265842684 +0000 UTC m=+0.015184555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:14 compute-0 systemd[1]: libpod-conmon-8b49c73f1e1726d42b3f5c0eca83ee47ea819ae6c09073b241acdf5ff01f2faf.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73508]: 2025-11-25 09:31:14.395069777 +0000 UTC m=+0.027258135 container create a2971010c9c2869fb0f89e1ffc1eaac4224d90fb556ada5adcee910672ea9daf (image=quay.io/ceph/ceph:v19, name=friendly_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:31:14 compute-0 systemd[1]: Started libpod-conmon-a2971010c9c2869fb0f89e1ffc1eaac4224d90fb556ada5adcee910672ea9daf.scope.
Nov 25 09:31:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:14 compute-0 podman[73508]: 2025-11-25 09:31:14.435640956 +0000 UTC m=+0.067829314 container init a2971010c9c2869fb0f89e1ffc1eaac4224d90fb556ada5adcee910672ea9daf (image=quay.io/ceph/ceph:v19, name=friendly_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:14 compute-0 podman[73508]: 2025-11-25 09:31:14.439594128 +0000 UTC m=+0.071782487 container start a2971010c9c2869fb0f89e1ffc1eaac4224d90fb556ada5adcee910672ea9daf (image=quay.io/ceph/ceph:v19, name=friendly_thompson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:31:14 compute-0 podman[73508]: 2025-11-25 09:31:14.441078246 +0000 UTC m=+0.073266604 container attach a2971010c9c2869fb0f89e1ffc1eaac4224d90fb556ada5adcee910672ea9daf (image=quay.io/ceph/ceph:v19, name=friendly_thompson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 09:31:14 compute-0 friendly_thompson[73522]: AQBidyVptWoeGxAA+OaRyHUxUNR/sF58miRarg==
Nov 25 09:31:14 compute-0 systemd[1]: libpod-a2971010c9c2869fb0f89e1ffc1eaac4224d90fb556ada5adcee910672ea9daf.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73508]: 2025-11-25 09:31:14.460040261 +0000 UTC m=+0.092228619 container died a2971010c9c2869fb0f89e1ffc1eaac4224d90fb556ada5adcee910672ea9daf (image=quay.io/ceph/ceph:v19, name=friendly_thompson, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:14 compute-0 podman[73508]: 2025-11-25 09:31:14.475304605 +0000 UTC m=+0.107492964 container remove a2971010c9c2869fb0f89e1ffc1eaac4224d90fb556ada5adcee910672ea9daf (image=quay.io/ceph/ceph:v19, name=friendly_thompson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:14 compute-0 podman[73508]: 2025-11-25 09:31:14.383333984 +0000 UTC m=+0.015522362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:14 compute-0 systemd[1]: libpod-conmon-a2971010c9c2869fb0f89e1ffc1eaac4224d90fb556ada5adcee910672ea9daf.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73537]: 2025-11-25 09:31:14.516125987 +0000 UTC m=+0.027861191 container create 21fb28aba9ecd36af45941a7537f8fbbc5e1c663e39302483ffc0f97a655a350 (image=quay.io/ceph/ceph:v19, name=serene_taussig, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:14 compute-0 systemd[1]: Started libpod-conmon-21fb28aba9ecd36af45941a7537f8fbbc5e1c663e39302483ffc0f97a655a350.scope.
Nov 25 09:31:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:14 compute-0 podman[73537]: 2025-11-25 09:31:14.553254678 +0000 UTC m=+0.064989882 container init 21fb28aba9ecd36af45941a7537f8fbbc5e1c663e39302483ffc0f97a655a350 (image=quay.io/ceph/ceph:v19, name=serene_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:14 compute-0 podman[73537]: 2025-11-25 09:31:14.556759625 +0000 UTC m=+0.068494819 container start 21fb28aba9ecd36af45941a7537f8fbbc5e1c663e39302483ffc0f97a655a350 (image=quay.io/ceph/ceph:v19, name=serene_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:31:14 compute-0 podman[73537]: 2025-11-25 09:31:14.55799245 +0000 UTC m=+0.069727643 container attach 21fb28aba9ecd36af45941a7537f8fbbc5e1c663e39302483ffc0f97a655a350 (image=quay.io/ceph/ceph:v19, name=serene_taussig, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:14 compute-0 serene_taussig[73552]: AQBidyVpDo8YIhAAGkJNOXZMs97gpEQUkiOx6w==
Nov 25 09:31:14 compute-0 systemd[1]: libpod-21fb28aba9ecd36af45941a7537f8fbbc5e1c663e39302483ffc0f97a655a350.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73537]: 2025-11-25 09:31:14.574230328 +0000 UTC m=+0.085965523 container died 21fb28aba9ecd36af45941a7537f8fbbc5e1c663e39302483ffc0f97a655a350 (image=quay.io/ceph/ceph:v19, name=serene_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:31:14 compute-0 podman[73537]: 2025-11-25 09:31:14.594253885 +0000 UTC m=+0.105989078 container remove 21fb28aba9ecd36af45941a7537f8fbbc5e1c663e39302483ffc0f97a655a350 (image=quay.io/ceph/ceph:v19, name=serene_taussig, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:31:14 compute-0 podman[73537]: 2025-11-25 09:31:14.505446946 +0000 UTC m=+0.017182150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:14 compute-0 systemd[1]: libpod-conmon-21fb28aba9ecd36af45941a7537f8fbbc5e1c663e39302483ffc0f97a655a350.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73568]: 2025-11-25 09:31:14.636696734 +0000 UTC m=+0.028887025 container create 04184eae975ce4fbac23333f8236804e1234257913682339f6034c27d8753a3c (image=quay.io/ceph/ceph:v19, name=festive_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:14 compute-0 systemd[1]: Started libpod-conmon-04184eae975ce4fbac23333f8236804e1234257913682339f6034c27d8753a3c.scope.
Nov 25 09:31:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ccc440d78f7d245abd8e6d9d472a25addb79def040867adf62d6aae32ee704/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:14 compute-0 podman[73568]: 2025-11-25 09:31:14.67152666 +0000 UTC m=+0.063716973 container init 04184eae975ce4fbac23333f8236804e1234257913682339f6034c27d8753a3c (image=quay.io/ceph/ceph:v19, name=festive_nightingale, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 25 09:31:14 compute-0 podman[73568]: 2025-11-25 09:31:14.675625136 +0000 UTC m=+0.067815429 container start 04184eae975ce4fbac23333f8236804e1234257913682339f6034c27d8753a3c (image=quay.io/ceph/ceph:v19, name=festive_nightingale, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:14 compute-0 podman[73568]: 2025-11-25 09:31:14.67677309 +0000 UTC m=+0.068963392 container attach 04184eae975ce4fbac23333f8236804e1234257913682339f6034c27d8753a3c (image=quay.io/ceph/ceph:v19, name=festive_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Nov 25 09:31:14 compute-0 festive_nightingale[73582]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 25 09:31:14 compute-0 festive_nightingale[73582]: setting min_mon_release = quincy
Nov 25 09:31:14 compute-0 festive_nightingale[73582]: /usr/bin/monmaptool: set fsid to af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:14 compute-0 festive_nightingale[73582]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 25 09:31:14 compute-0 systemd[1]: libpod-04184eae975ce4fbac23333f8236804e1234257913682339f6034c27d8753a3c.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73568]: 2025-11-25 09:31:14.698365744 +0000 UTC m=+0.090556036 container died 04184eae975ce4fbac23333f8236804e1234257913682339f6034c27d8753a3c (image=quay.io/ceph/ceph:v19, name=festive_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:14 compute-0 podman[73568]: 2025-11-25 09:31:14.713672067 +0000 UTC m=+0.105862360 container remove 04184eae975ce4fbac23333f8236804e1234257913682339f6034c27d8753a3c (image=quay.io/ceph/ceph:v19, name=festive_nightingale, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:31:14 compute-0 podman[73568]: 2025-11-25 09:31:14.626437143 +0000 UTC m=+0.018627435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:14 compute-0 systemd[1]: libpod-conmon-04184eae975ce4fbac23333f8236804e1234257913682339f6034c27d8753a3c.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73598]: 2025-11-25 09:31:14.755403725 +0000 UTC m=+0.024911972 container create 180da351da31e2760cd6e6aee142ad0e4daf64633234bf6dfbef28cd966845ba (image=quay.io/ceph/ceph:v19, name=zen_hodgkin, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:14 compute-0 systemd[1]: Started libpod-conmon-180da351da31e2760cd6e6aee142ad0e4daf64633234bf6dfbef28cd966845ba.scope.
Nov 25 09:31:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87af356928641ade2f2e30d0c372ba30b7a0794741f383f32e3d43c0846d95f7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87af356928641ade2f2e30d0c372ba30b7a0794741f383f32e3d43c0846d95f7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87af356928641ade2f2e30d0c372ba30b7a0794741f383f32e3d43c0846d95f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87af356928641ade2f2e30d0c372ba30b7a0794741f383f32e3d43c0846d95f7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:14 compute-0 podman[73598]: 2025-11-25 09:31:14.801328457 +0000 UTC m=+0.070836724 container init 180da351da31e2760cd6e6aee142ad0e4daf64633234bf6dfbef28cd966845ba (image=quay.io/ceph/ceph:v19, name=zen_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:14 compute-0 podman[73598]: 2025-11-25 09:31:14.80525547 +0000 UTC m=+0.074763717 container start 180da351da31e2760cd6e6aee142ad0e4daf64633234bf6dfbef28cd966845ba (image=quay.io/ceph/ceph:v19, name=zen_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:31:14 compute-0 podman[73598]: 2025-11-25 09:31:14.80703249 +0000 UTC m=+0.076540737 container attach 180da351da31e2760cd6e6aee142ad0e4daf64633234bf6dfbef28cd966845ba (image=quay.io/ceph/ceph:v19, name=zen_hodgkin, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:31:14 compute-0 podman[73598]: 2025-11-25 09:31:14.745495557 +0000 UTC m=+0.015003814 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:14 compute-0 systemd[1]: libpod-180da351da31e2760cd6e6aee142ad0e4daf64633234bf6dfbef28cd966845ba.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 podman[73598]: 2025-11-25 09:31:14.847727032 +0000 UTC m=+0.117235279 container died 180da351da31e2760cd6e6aee142ad0e4daf64633234bf6dfbef28cd966845ba (image=quay.io/ceph/ceph:v19, name=zen_hodgkin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:31:14 compute-0 podman[73598]: 2025-11-25 09:31:14.8658631 +0000 UTC m=+0.135371348 container remove 180da351da31e2760cd6e6aee142ad0e4daf64633234bf6dfbef28cd966845ba (image=quay.io/ceph/ceph:v19, name=zen_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:14 compute-0 systemd[1]: libpod-conmon-180da351da31e2760cd6e6aee142ad0e4daf64633234bf6dfbef28cd966845ba.scope: Deactivated successfully.
Nov 25 09:31:14 compute-0 systemd[1]: Reloading.
Nov 25 09:31:14 compute-0 systemd-sysv-generator[73677]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:31:14 compute-0 systemd-rc-local-generator[73674]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c7dcc75320d594062c162ed1ee17405442b672879ed9effff2c64ec34243aa-merged.mount: Deactivated successfully.
Nov 25 09:31:15 compute-0 systemd[1]: Reloading.
Nov 25 09:31:15 compute-0 systemd-sysv-generator[73709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:31:15 compute-0 systemd-rc-local-generator[73706]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:31:15 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 25 09:31:15 compute-0 systemd[1]: Reloading.
Nov 25 09:31:15 compute-0 systemd-rc-local-generator[73746]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:31:15 compute-0 systemd-sysv-generator[73750]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:31:15 compute-0 systemd[1]: Reached target Ceph cluster af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:31:15 compute-0 systemd[1]: Reloading.
Nov 25 09:31:15 compute-0 systemd-rc-local-generator[73784]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:31:15 compute-0 systemd-sysv-generator[73788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:31:15 compute-0 systemd[1]: Reloading.
Nov 25 09:31:15 compute-0 systemd-sysv-generator[73826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:31:15 compute-0 systemd-rc-local-generator[73821]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:31:15 compute-0 systemd[1]: Created slice Slice /system/ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:31:15 compute-0 systemd[1]: Reached target System Time Set.
Nov 25 09:31:15 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 25 09:31:15 compute-0 systemd[1]: Starting Ceph mon.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:31:15 compute-0 podman[73879]: 2025-11-25 09:31:15.976682886 +0000 UTC m=+0.026074975 container create c1e48b53cf019f0f8c327ffd55e7316470aacb781664cb2702eca5fe226e9f9b (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d49c4fa42199f051986745b5ecad0de1d4eddb2f8c25ce77666435e2d8a1e4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d49c4fa42199f051986745b5ecad0de1d4eddb2f8c25ce77666435e2d8a1e4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d49c4fa42199f051986745b5ecad0de1d4eddb2f8c25ce77666435e2d8a1e4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d49c4fa42199f051986745b5ecad0de1d4eddb2f8c25ce77666435e2d8a1e4b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 podman[73879]: 2025-11-25 09:31:16.01681115 +0000 UTC m=+0.066203249 container init c1e48b53cf019f0f8c327ffd55e7316470aacb781664cb2702eca5fe226e9f9b (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:16 compute-0 podman[73879]: 2025-11-25 09:31:16.022489826 +0000 UTC m=+0.071881914 container start c1e48b53cf019f0f8c327ffd55e7316470aacb781664cb2702eca5fe226e9f9b (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:16 compute-0 bash[73879]: c1e48b53cf019f0f8c327ffd55e7316470aacb781664cb2702eca5fe226e9f9b
Nov 25 09:31:16 compute-0 podman[73879]: 2025-11-25 09:31:15.965771095 +0000 UTC m=+0.015163194 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:16 compute-0 systemd[1]: Started Ceph mon.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:31:16 compute-0 ceph-mon[73895]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 25 09:31:16 compute-0 ceph-mon[73895]: pidfile_write: ignore empty --pid-file
Nov 25 09:31:16 compute-0 ceph-mon[73895]: load: jerasure load: lrc 
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: RocksDB version: 7.9.2
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Git sha 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: DB SUMMARY
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: DB Session ID:  SOUD1SYU7R1EDGVKP614
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: CURRENT file:  CURRENT
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                         Options.error_if_exists: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                       Options.create_if_missing: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                                     Options.env: 0x5572f202ac20
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                                Options.info_log: 0x5572f3b2ad60
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                              Options.statistics: (nil)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                               Options.use_fsync: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                              Options.db_log_dir: 
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                                 Options.wal_dir: 
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                    Options.write_buffer_manager: 0x5572f3b2f900
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.unordered_write: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                               Options.row_cache: None
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                              Options.wal_filter: None
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.two_write_queues: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.wal_compression: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.atomic_flush: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.max_background_jobs: 2
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.max_background_compactions: -1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.max_subcompactions: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                          Options.max_open_files: -1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Compression algorithms supported:
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         kZSTD supported: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         kXpressCompression supported: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         kBZip2Compression supported: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         kLZ4Compression supported: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         kZlibCompression supported: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         kSnappyCompression supported: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:           Options.merge_operator: 
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:        Options.compaction_filter: None
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572f3b2a500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5572f3b4f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:          Options.compression: NoCompression
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.num_levels: 7
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ea9635bc-b5c0-4bcc-b39b-aa36751871d5
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063076055832, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063076056651, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "SOUD1SYU7R1EDGVKP614", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063076056735, "job": 1, "event": "recovery_finished"}
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5572f3b50e00
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: DB pointer 0x5572f3c5a000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:31:16 compute-0 ceph-mon[73895]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5572f3b4f350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 09:31:16 compute-0 ceph-mon[73895]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@-1(???) e0 preinit fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 25 09:31:16 compute-0 ceph-mon[73895]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 09:31:16 compute-0 ceph-mon[73895]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : monmap epoch 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : last_changed 2025-11-25T09:31:14.695764+0000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : created 2025-11-25T09:31:14.695764+0000
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC 7763 64-Core Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865360,os=Linux}
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).mds e1 new map
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-11-25T09:31:16.071954+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mkfs af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 09:31:16 compute-0 podman[73896]: 2025-11-25 09:31:16.083878478 +0000 UTC m=+0.034922453 container create 556eaf5284b82fe8edf33ac58769b96daabbe41257bd9f0b6067313cb4142aab (image=quay.io/ceph/ceph:v19, name=goofy_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:31:16 compute-0 systemd[1]: Started libpod-conmon-556eaf5284b82fe8edf33ac58769b96daabbe41257bd9f0b6067313cb4142aab.scope.
Nov 25 09:31:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccbb9c4ec5fb4d63323d27c645b8c9dbe441319b6aefb20b507e635d1b73aee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccbb9c4ec5fb4d63323d27c645b8c9dbe441319b6aefb20b507e635d1b73aee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccbb9c4ec5fb4d63323d27c645b8c9dbe441319b6aefb20b507e635d1b73aee/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 podman[73896]: 2025-11-25 09:31:16.142232429 +0000 UTC m=+0.093276414 container init 556eaf5284b82fe8edf33ac58769b96daabbe41257bd9f0b6067313cb4142aab (image=quay.io/ceph/ceph:v19, name=goofy_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:16 compute-0 podman[73896]: 2025-11-25 09:31:16.146682228 +0000 UTC m=+0.097726203 container start 556eaf5284b82fe8edf33ac58769b96daabbe41257bd9f0b6067313cb4142aab (image=quay.io/ceph/ceph:v19, name=goofy_turing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:31:16 compute-0 podman[73896]: 2025-11-25 09:31:16.147703914 +0000 UTC m=+0.098747889 container attach 556eaf5284b82fe8edf33ac58769b96daabbe41257bd9f0b6067313cb4142aab (image=quay.io/ceph/ceph:v19, name=goofy_turing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:16 compute-0 podman[73896]: 2025-11-25 09:31:16.072685958 +0000 UTC m=+0.023729954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018403518' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 09:31:16 compute-0 goofy_turing[73947]:   cluster:
Nov 25 09:31:16 compute-0 goofy_turing[73947]:     id:     af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:16 compute-0 goofy_turing[73947]:     health: HEALTH_OK
Nov 25 09:31:16 compute-0 goofy_turing[73947]:  
Nov 25 09:31:16 compute-0 goofy_turing[73947]:   services:
Nov 25 09:31:16 compute-0 goofy_turing[73947]:     mon: 1 daemons, quorum compute-0 (age 0.215064s)
Nov 25 09:31:16 compute-0 goofy_turing[73947]:     mgr: no daemons active
Nov 25 09:31:16 compute-0 goofy_turing[73947]:     osd: 0 osds: 0 up, 0 in
Nov 25 09:31:16 compute-0 goofy_turing[73947]:  
Nov 25 09:31:16 compute-0 goofy_turing[73947]:   data:
Nov 25 09:31:16 compute-0 goofy_turing[73947]:     pools:   0 pools, 0 pgs
Nov 25 09:31:16 compute-0 goofy_turing[73947]:     objects: 0 objects, 0 B
Nov 25 09:31:16 compute-0 goofy_turing[73947]:     usage:   0 B used, 0 B / 0 B avail
Nov 25 09:31:16 compute-0 goofy_turing[73947]:     pgs:     
Nov 25 09:31:16 compute-0 goofy_turing[73947]:  
Nov 25 09:31:16 compute-0 podman[73896]: 2025-11-25 09:31:16.302995161 +0000 UTC m=+0.254039135 container died 556eaf5284b82fe8edf33ac58769b96daabbe41257bd9f0b6067313cb4142aab (image=quay.io/ceph/ceph:v19, name=goofy_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:31:16 compute-0 systemd[1]: libpod-556eaf5284b82fe8edf33ac58769b96daabbe41257bd9f0b6067313cb4142aab.scope: Deactivated successfully.
Nov 25 09:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ccbb9c4ec5fb4d63323d27c645b8c9dbe441319b6aefb20b507e635d1b73aee-merged.mount: Deactivated successfully.
Nov 25 09:31:16 compute-0 podman[73896]: 2025-11-25 09:31:16.321600193 +0000 UTC m=+0.272644167 container remove 556eaf5284b82fe8edf33ac58769b96daabbe41257bd9f0b6067313cb4142aab (image=quay.io/ceph/ceph:v19, name=goofy_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:16 compute-0 systemd[1]: libpod-conmon-556eaf5284b82fe8edf33ac58769b96daabbe41257bd9f0b6067313cb4142aab.scope: Deactivated successfully.
Nov 25 09:31:16 compute-0 podman[73983]: 2025-11-25 09:31:16.362377632 +0000 UTC m=+0.025668970 container create 639960ae09b27635ff014d4dc27e5e3bd90dfc1bdc0c87419cb0d150270cf385 (image=quay.io/ceph/ceph:v19, name=bold_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:16 compute-0 systemd[1]: Started libpod-conmon-639960ae09b27635ff014d4dc27e5e3bd90dfc1bdc0c87419cb0d150270cf385.scope.
Nov 25 09:31:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed9ffdf27bf8ae2cd4148795aa98d480418ee4cb0ae52647a315d1ff77d8e74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed9ffdf27bf8ae2cd4148795aa98d480418ee4cb0ae52647a315d1ff77d8e74/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed9ffdf27bf8ae2cd4148795aa98d480418ee4cb0ae52647a315d1ff77d8e74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed9ffdf27bf8ae2cd4148795aa98d480418ee4cb0ae52647a315d1ff77d8e74/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 podman[73983]: 2025-11-25 09:31:16.412816042 +0000 UTC m=+0.076107380 container init 639960ae09b27635ff014d4dc27e5e3bd90dfc1bdc0c87419cb0d150270cf385 (image=quay.io/ceph/ceph:v19, name=bold_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:31:16 compute-0 podman[73983]: 2025-11-25 09:31:16.418667792 +0000 UTC m=+0.081959121 container start 639960ae09b27635ff014d4dc27e5e3bd90dfc1bdc0c87419cb0d150270cf385 (image=quay.io/ceph/ceph:v19, name=bold_merkle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:31:16 compute-0 podman[73983]: 2025-11-25 09:31:16.419959948 +0000 UTC m=+0.083251286 container attach 639960ae09b27635ff014d4dc27e5e3bd90dfc1bdc0c87419cb0d150270cf385 (image=quay.io/ceph/ceph:v19, name=bold_merkle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:31:16 compute-0 podman[73983]: 2025-11-25 09:31:16.352438825 +0000 UTC m=+0.015730183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2582463800' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2582463800' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 09:31:16 compute-0 bold_merkle[73997]: 
Nov 25 09:31:16 compute-0 bold_merkle[73997]: [global]
Nov 25 09:31:16 compute-0 bold_merkle[73997]:         fsid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:16 compute-0 bold_merkle[73997]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 25 09:31:16 compute-0 systemd[1]: libpod-639960ae09b27635ff014d4dc27e5e3bd90dfc1bdc0c87419cb0d150270cf385.scope: Deactivated successfully.
Nov 25 09:31:16 compute-0 podman[74023]: 2025-11-25 09:31:16.600878665 +0000 UTC m=+0.016678732 container died 639960ae09b27635ff014d4dc27e5e3bd90dfc1bdc0c87419cb0d150270cf385 (image=quay.io/ceph/ceph:v19, name=bold_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ed9ffdf27bf8ae2cd4148795aa98d480418ee4cb0ae52647a315d1ff77d8e74-merged.mount: Deactivated successfully.
Nov 25 09:31:16 compute-0 podman[74023]: 2025-11-25 09:31:16.61779314 +0000 UTC m=+0.033593187 container remove 639960ae09b27635ff014d4dc27e5e3bd90dfc1bdc0c87419cb0d150270cf385 (image=quay.io/ceph/ceph:v19, name=bold_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:16 compute-0 systemd[1]: libpod-conmon-639960ae09b27635ff014d4dc27e5e3bd90dfc1bdc0c87419cb0d150270cf385.scope: Deactivated successfully.
Nov 25 09:31:16 compute-0 podman[74034]: 2025-11-25 09:31:16.658792547 +0000 UTC m=+0.023727549 container create 247c77b82ec24c397e2e1b48327062f2df22c702dfb28f20c52cd00731c3c70c (image=quay.io/ceph/ceph:v19, name=heuristic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:31:16 compute-0 systemd[1]: Started libpod-conmon-247c77b82ec24c397e2e1b48327062f2df22c702dfb28f20c52cd00731c3c70c.scope.
Nov 25 09:31:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15aa28601f83b136bb1a097300d7520b19d75795247329d579344ab306bdaad1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15aa28601f83b136bb1a097300d7520b19d75795247329d579344ab306bdaad1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15aa28601f83b136bb1a097300d7520b19d75795247329d579344ab306bdaad1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15aa28601f83b136bb1a097300d7520b19d75795247329d579344ab306bdaad1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:16 compute-0 podman[74034]: 2025-11-25 09:31:16.704239668 +0000 UTC m=+0.069174661 container init 247c77b82ec24c397e2e1b48327062f2df22c702dfb28f20c52cd00731c3c70c (image=quay.io/ceph/ceph:v19, name=heuristic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:31:16 compute-0 podman[74034]: 2025-11-25 09:31:16.709775575 +0000 UTC m=+0.074710566 container start 247c77b82ec24c397e2e1b48327062f2df22c702dfb28f20c52cd00731c3c70c (image=quay.io/ceph/ceph:v19, name=heuristic_goldwasser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:16 compute-0 podman[74034]: 2025-11-25 09:31:16.710864798 +0000 UTC m=+0.075799780 container attach 247c77b82ec24c397e2e1b48327062f2df22c702dfb28f20c52cd00731c3c70c (image=quay.io/ceph/ceph:v19, name=heuristic_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:16 compute-0 podman[74034]: 2025-11-25 09:31:16.649736446 +0000 UTC m=+0.014671458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:16 compute-0 ceph-mon[73895]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:31:16 compute-0 ceph-mon[73895]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2024151989' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:16 compute-0 systemd[1]: libpod-247c77b82ec24c397e2e1b48327062f2df22c702dfb28f20c52cd00731c3c70c.scope: Deactivated successfully.
Nov 25 09:31:16 compute-0 podman[74074]: 2025-11-25 09:31:16.884693529 +0000 UTC m=+0.014472864 container died 247c77b82ec24c397e2e1b48327062f2df22c702dfb28f20c52cd00731c3c70c (image=quay.io/ceph/ceph:v19, name=heuristic_goldwasser, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:16 compute-0 podman[74074]: 2025-11-25 09:31:16.901581193 +0000 UTC m=+0.031360528 container remove 247c77b82ec24c397e2e1b48327062f2df22c702dfb28f20c52cd00731c3c70c (image=quay.io/ceph/ceph:v19, name=heuristic_goldwasser, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:31:16 compute-0 systemd[1]: libpod-conmon-247c77b82ec24c397e2e1b48327062f2df22c702dfb28f20c52cd00731c3c70c.scope: Deactivated successfully.
Nov 25 09:31:16 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:31:16 compute-0 chronyd[58457]: Selected source 142.202.190.19 (pool.ntp.org)
Nov 25 09:31:17 compute-0 ceph-mon[73895]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 09:31:17 compute-0 ceph-mon[73895]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 09:31:17 compute-0 ceph-mon[73895]: mon.compute-0@0(leader) e1 shutdown
Nov 25 09:31:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0[73891]: 2025-11-25T09:31:17.021+0000 7f0353a7a640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 09:31:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0[73891]: 2025-11-25T09:31:17.021+0000 7f0353a7a640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 09:31:17 compute-0 ceph-mon[73895]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 09:31:17 compute-0 ceph-mon[73895]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 09:31:17 compute-0 podman[74109]: 2025-11-25 09:31:17.285480603 +0000 UTC m=+0.284299454 container died c1e48b53cf019f0f8c327ffd55e7316470aacb781664cb2702eca5fe226e9f9b (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d49c4fa42199f051986745b5ecad0de1d4eddb2f8c25ce77666435e2d8a1e4b-merged.mount: Deactivated successfully.
Nov 25 09:31:17 compute-0 podman[74109]: 2025-11-25 09:31:17.302277818 +0000 UTC m=+0.301096658 container remove c1e48b53cf019f0f8c327ffd55e7316470aacb781664cb2702eca5fe226e9f9b (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:31:17 compute-0 bash[74109]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0
Nov 25 09:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 09:31:17 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mon.compute-0.service: Deactivated successfully.
Nov 25 09:31:17 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:31:17 compute-0 systemd[1]: Starting Ceph mon.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:31:17 compute-0 podman[74191]: 2025-11-25 09:31:17.522481415 +0000 UTC m=+0.026774778 container create f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d170ce43e3747594413b0144c61ba512af91e8ea7556bf6e734d249724bd84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d170ce43e3747594413b0144c61ba512af91e8ea7556bf6e734d249724bd84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d170ce43e3747594413b0144c61ba512af91e8ea7556bf6e734d249724bd84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d170ce43e3747594413b0144c61ba512af91e8ea7556bf6e734d249724bd84/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 podman[74191]: 2025-11-25 09:31:17.55939956 +0000 UTC m=+0.063692923 container init f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 25 09:31:17 compute-0 podman[74191]: 2025-11-25 09:31:17.563482995 +0000 UTC m=+0.067776347 container start f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:31:17 compute-0 bash[74191]: f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d
Nov 25 09:31:17 compute-0 podman[74191]: 2025-11-25 09:31:17.512230319 +0000 UTC m=+0.016523691 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:17 compute-0 systemd[1]: Started Ceph mon.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:31:17 compute-0 ceph-mon[74207]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:31:17 compute-0 ceph-mon[74207]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 25 09:31:17 compute-0 ceph-mon[74207]: pidfile_write: ignore empty --pid-file
Nov 25 09:31:17 compute-0 ceph-mon[74207]: load: jerasure load: lrc 
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: RocksDB version: 7.9.2
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Git sha 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: DB SUMMARY
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: DB Session ID:  YG22O1RMAUAP611HLIK5
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: CURRENT file:  CURRENT
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 46813 ; 
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                         Options.error_if_exists: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                       Options.create_if_missing: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                                     Options.env: 0x55e6ac224c20
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                                Options.info_log: 0x55e6ae54ee20
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                              Options.statistics: (nil)
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                               Options.use_fsync: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                              Options.db_log_dir: 
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                                 Options.wal_dir: 
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                    Options.write_buffer_manager: 0x55e6ae553900
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.unordered_write: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                               Options.row_cache: None
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                              Options.wal_filter: None
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.two_write_queues: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.wal_compression: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.atomic_flush: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.max_background_jobs: 2
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.max_background_compactions: -1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.max_subcompactions: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                          Options.max_open_files: -1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Compression algorithms supported:
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         kZSTD supported: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         kXpressCompression supported: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         kBZip2Compression supported: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         kLZ4Compression supported: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         kZlibCompression supported: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         kSnappyCompression supported: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:           Options.merge_operator: 
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:        Options.compaction_filter: None
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e6ae54eaa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e6ae573350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
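
The table_factory block above and the Options lines that follow are plain "key: value" text, so they can be folded into a dict, e.g. for diffing against another monitor's dump. A minimal sketch, assuming the journal output has been exported to a text file (the path is hypothetical; nothing here talks to the cluster):

import sys

def parse_rocksdb_options(path):
    # Collect "Options.<name>: <value>" lines and the indented
    # table_factory continuation lines ("<name> : <value>") into a dict.
    opts = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.rstrip("\n")
            if "Options." in line:                      # journal-prefixed form
                body = line.split("Options.", 1)[1]
            elif line.startswith(" ") and ":" in line:  # indented continuation
                body = line.strip()
            else:
                continue
            name, _, value = body.partition(":")
            opts[name.strip()] = value.strip()
    return opts

if __name__ == "__main__":
    opts = parse_rocksdb_options(sys.argv[1] if len(sys.argv) > 1
                                 else "mon-rocksdb-options.txt")
    print(opts["write_buffer_size"], opts["block_size"])   # 33554432 4096
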
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:          Options.compression: NoCompression
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.num_levels: 7
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
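
Two numbers in this dump are worth unpacking. With max_bytes_for_level_base=268435456 and max_bytes_for_level_multiplier=10, the static per-level targets grow tenfold per level (though level_compaction_dynamic_level_bytes: 1 means RocksDB re-derives these from the last level at runtime, so this is only the naive view), and the CompactOnDeletionCollector shown earlier hints a compaction when half of a sliding window is tombstones. The arithmetic:

base = 268_435_456                # Options.max_bytes_for_level_base (256 MiB)
mult = 10.0                       # Options.max_bytes_for_level_multiplier
for level in range(1, 7):         # num_levels is 7; L0 is sized by file count
    print(f"L{level} static target: {base * mult ** (level - 1) / 2**30:9.2f} GiB")

window, trigger = 32_768, 16_384  # CompactOnDeletionCollector parameters above
print(trigger / window)           # 0.5 -> compaction hinted at 50% deletions
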
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ea9635bc-b5c0-4bcc-b39b-aa36751871d5
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063077591987, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063077593843, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 46708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 117, "table_properties": {"data_size": 45279, "index_size": 135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2753, "raw_average_key_size": 31, "raw_value_size": 43072, "raw_average_value_size": 489, "num_data_blocks": 7, "num_entries": 88, "num_filter_entries": 88, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063077, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063077593946, "job": 1, "event": "recovery_finished"}
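
The EVENT_LOG_v1 lines carry a JSON payload after the marker, so recovery and flush events can be consumed programmatically. A sketch using the recovery_finished line above; note that time_micros round-trips to the journal's own 09:31:17 timestamp:

import json
from datetime import datetime, timezone

line = 'rocksdb: EVENT_LOG_v1 {"time_micros": 1764063077593946, "job": 1, "event": "recovery_finished"}'
event = json.loads(line.split("EVENT_LOG_v1", 1)[1])
print(event["event"], event["job"])       # recovery_finished 1
print(datetime.fromtimestamp(event["time_micros"] / 1e6, timezone.utc))
# 2025-11-25 09:31:17.593946+00:00 -- matches the journal timestamp
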
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e6ae574e00
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: DB pointer 0x55e6ae67e000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:31:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   47.51 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     27.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0   47.51 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     27.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     27.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     27.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 6.54 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 6.54 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e6ae573350#2 capacity: 512.00 MB usage: 0.75 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.33 KB,6.25849e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
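
The stats block reports sizes in human units while the option dump uses raw bytes, and the conversions line up (the 536870912-byte block_cache is the "capacity: 512.00 MB" reported a few lines up). For reference:

MiB = 2 ** 20
for name, raw_bytes in [
    ("write_buffer_size",        33_554_432),   # memtable size
    ("target_file_size_base",    67_108_864),   # L1 SST target
    ("max_bytes_for_level_base", 268_435_456),  # L1 capacity
    ("block_cache capacity",     536_870_912),  # BinnedLRUCache; 2**4 = 16 shards
]:
    print(f"{name}: {raw_bytes // MiB} MiB")    # 32, 64, 256, 512
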
Nov 25 09:31:17 compute-0 ceph-mon[74207]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@-1(???) e1 preinit fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@-1(???).mds e1 new map
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-11-25T09:31:16:071954+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 09:31:17 compute-0 ceph-mon[74207]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : monmap epoch 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : last_changed 2025-11-25T09:31:14.695764+0000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : created 2025-11-25T09:31:14.695764+0000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 09:31:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
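
The rank-0 monmap entry above pairs the msgr2 endpoint (port 3300) with the legacy msgr1 endpoint (port 6789). A sketch that rebuilds the bracketed address-vector form a client-side mon_host setting would use, with the values copied from the log rather than queried live:

mons = {"compute-0": ["v2:192.168.122.100:3300/0", "v1:192.168.122.100:6789/0"]}
mon_host = ",".join("[" + ",".join(addrs) + "]" for addrs in mons.values())
print(mon_host)   # [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
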
Nov 25 09:31:17 compute-0 podman[74208]: 2025-11-25 09:31:17.611222023 +0000 UTC m=+0.028724522 container create 83db0ca1728a0db741734860f77897f0683f1b6b929bbc7d76405535cc5524f1 (image=quay.io/ceph/ceph:v19, name=gallant_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:31:17 compute-0 systemd[1]: Started libpod-conmon-83db0ca1728a0db741734860f77897f0683f1b6b929bbc7d76405535cc5524f1.scope.
Nov 25 09:31:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd16555defad8179e82353bd10097473e8d792d33763ae7398945d7d9e5a140/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd16555defad8179e82353bd10097473e8d792d33763ae7398945d7d9e5a140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd16555defad8179e82353bd10097473e8d792d33763ae7398945d7d9e5a140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 podman[74208]: 2025-11-25 09:31:17.667379677 +0000 UTC m=+0.084882197 container init 83db0ca1728a0db741734860f77897f0683f1b6b929bbc7d76405535cc5524f1 (image=quay.io/ceph/ceph:v19, name=gallant_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 09:31:17 compute-0 ceph-mon[74207]: monmap epoch 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:17 compute-0 ceph-mon[74207]: last_changed 2025-11-25T09:31:14.695764+0000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: created 2025-11-25T09:31:14.695764+0000
Nov 25 09:31:17 compute-0 ceph-mon[74207]: min_mon_release 19 (squid)
Nov 25 09:31:17 compute-0 ceph-mon[74207]: election_strategy: 1
Nov 25 09:31:17 compute-0 ceph-mon[74207]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 09:31:17 compute-0 ceph-mon[74207]: fsmap 
Nov 25 09:31:17 compute-0 ceph-mon[74207]: osdmap e1: 0 total, 0 up, 0 in
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mgrmap e1: no daemons active
Nov 25 09:31:17 compute-0 podman[74208]: 2025-11-25 09:31:17.671728442 +0000 UTC m=+0.089230942 container start 83db0ca1728a0db741734860f77897f0683f1b6b929bbc7d76405535cc5524f1 (image=quay.io/ceph/ceph:v19, name=gallant_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:17 compute-0 podman[74208]: 2025-11-25 09:31:17.67285957 +0000 UTC m=+0.090362070 container attach 83db0ca1728a0db741734860f77897f0683f1b6b929bbc7d76405535cc5524f1 (image=quay.io/ceph/ceph:v19, name=gallant_kare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 09:31:17 compute-0 podman[74208]: 2025-11-25 09:31:17.600684498 +0000 UTC m=+0.018187018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Nov 25 09:31:17 compute-0 systemd[1]: libpod-83db0ca1728a0db741734860f77897f0683f1b6b929bbc7d76405535cc5524f1.scope: Deactivated successfully.
Nov 25 09:31:17 compute-0 podman[74285]: 2025-11-25 09:31:17.863421627 +0000 UTC m=+0.015722954 container died 83db0ca1728a0db741734860f77897f0683f1b6b929bbc7d76405535cc5524f1 (image=quay.io/ceph/ceph:v19, name=gallant_kare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:17 compute-0 podman[74285]: 2025-11-25 09:31:17.879437375 +0000 UTC m=+0.031738682 container remove 83db0ca1728a0db741734860f77897f0683f1b6b929bbc7d76405535cc5524f1 (image=quay.io/ceph/ceph:v19, name=gallant_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:31:17 compute-0 systemd[1]: libpod-conmon-83db0ca1728a0db741734860f77897f0683f1b6b929bbc7d76405535cc5524f1.scope: Deactivated successfully.
Nov 25 09:31:17 compute-0 podman[74297]: 2025-11-25 09:31:17.921337365 +0000 UTC m=+0.025030917 container create 4fee224fe068121ba0ac5472048d9a9ea8b3b14a4c3f4ebb5216e8dde2318ede (image=quay.io/ceph/ceph:v19, name=boring_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:17 compute-0 systemd[1]: Started libpod-conmon-4fee224fe068121ba0ac5472048d9a9ea8b3b14a4c3f4ebb5216e8dde2318ede.scope.
Nov 25 09:31:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adef8f59b462cc583a7edd2dfee28c0446994b0b597e8c1fdb7c48f8314b5a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adef8f59b462cc583a7edd2dfee28c0446994b0b597e8c1fdb7c48f8314b5a4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adef8f59b462cc583a7edd2dfee28c0446994b0b597e8c1fdb7c48f8314b5a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:17 compute-0 podman[74297]: 2025-11-25 09:31:17.977880591 +0000 UTC m=+0.081574153 container init 4fee224fe068121ba0ac5472048d9a9ea8b3b14a4c3f4ebb5216e8dde2318ede (image=quay.io/ceph/ceph:v19, name=boring_mestorf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:31:17 compute-0 podman[74297]: 2025-11-25 09:31:17.98158483 +0000 UTC m=+0.085278393 container start 4fee224fe068121ba0ac5472048d9a9ea8b3b14a4c3f4ebb5216e8dde2318ede (image=quay.io/ceph/ceph:v19, name=boring_mestorf, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 25 09:31:17 compute-0 podman[74297]: 2025-11-25 09:31:17.982726463 +0000 UTC m=+0.086420026 container attach 4fee224fe068121ba0ac5472048d9a9ea8b3b14a4c3f4ebb5216e8dde2318ede (image=quay.io/ceph/ceph:v19, name=boring_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:31:18 compute-0 podman[74297]: 2025-11-25 09:31:17.911007652 +0000 UTC m=+0.014701224 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Nov 25 09:31:18 compute-0 systemd[1]: libpod-4fee224fe068121ba0ac5472048d9a9ea8b3b14a4c3f4ebb5216e8dde2318ede.scope: Deactivated successfully.
Nov 25 09:31:18 compute-0 podman[74297]: 2025-11-25 09:31:18.131330783 +0000 UTC m=+0.235024335 container died 4fee224fe068121ba0ac5472048d9a9ea8b3b14a4c3f4ebb5216e8dde2318ede (image=quay.io/ceph/ceph:v19, name=boring_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 25 09:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8adef8f59b462cc583a7edd2dfee28c0446994b0b597e8c1fdb7c48f8314b5a4-merged.mount: Deactivated successfully.
Nov 25 09:31:18 compute-0 podman[74297]: 2025-11-25 09:31:18.150291296 +0000 UTC m=+0.253984858 container remove 4fee224fe068121ba0ac5472048d9a9ea8b3b14a4c3f4ebb5216e8dde2318ede (image=quay.io/ceph/ceph:v19, name=boring_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:31:18 compute-0 systemd[1]: libpod-conmon-4fee224fe068121ba0ac5472048d9a9ea8b3b14a4c3f4ebb5216e8dde2318ede.scope: Deactivated successfully.
Nov 25 09:31:18 compute-0 systemd[1]: Reloading.
Nov 25 09:31:18 compute-0 systemd-sysv-generator[74374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:31:18 compute-0 systemd-rc-local-generator[74366]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:31:18 compute-0 systemd[1]: Reloading.
Nov 25 09:31:18 compute-0 systemd-rc-local-generator[74406]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:31:18 compute-0 systemd-sysv-generator[74409]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:31:18 compute-0 systemd[1]: Starting Ceph mgr.compute-0.zcfgby for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:31:18 compute-0 podman[74460]: 2025-11-25 09:31:18.702436653 +0000 UTC m=+0.028135347 container create b4c97af4a954d2de944420a4d59d6686ddbdbb4f2ab2cb66326d7baa7017dc3e (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f567b2a3e51b5c115ee57a1b5457c000aba192b6baf6db47c234da88fa76ca26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f567b2a3e51b5c115ee57a1b5457c000aba192b6baf6db47c234da88fa76ca26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f567b2a3e51b5c115ee57a1b5457c000aba192b6baf6db47c234da88fa76ca26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f567b2a3e51b5c115ee57a1b5457c000aba192b6baf6db47c234da88fa76ca26/merged/var/lib/ceph/mgr/ceph-compute-0.zcfgby supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:18 compute-0 podman[74460]: 2025-11-25 09:31:18.748208505 +0000 UTC m=+0.073907208 container init b4c97af4a954d2de944420a4d59d6686ddbdbb4f2ab2cb66326d7baa7017dc3e (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:31:18 compute-0 podman[74460]: 2025-11-25 09:31:18.751956642 +0000 UTC m=+0.077655335 container start b4c97af4a954d2de944420a4d59d6686ddbdbb4f2ab2cb66326d7baa7017dc3e (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Nov 25 09:31:18 compute-0 bash[74460]: b4c97af4a954d2de944420a4d59d6686ddbdbb4f2ab2cb66326d7baa7017dc3e
Nov 25 09:31:18 compute-0 podman[74460]: 2025-11-25 09:31:18.690971083 +0000 UTC m=+0.016669786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:18 compute-0 systemd[1]: Started Ceph mgr.compute-0.zcfgby for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:31:18 compute-0 ceph-mgr[74476]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:31:18 compute-0 ceph-mgr[74476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:31:18 compute-0 ceph-mgr[74476]: pidfile_write: ignore empty --pid-file
Nov 25 09:31:18 compute-0 podman[74477]: 2025-11-25 09:31:18.806479905 +0000 UTC m=+0.033667973 container create 91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8 (image=quay.io/ceph/ceph:v19, name=magical_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:31:18 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'alerts'
Nov 25 09:31:18 compute-0 systemd[1]: Started libpod-conmon-91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8.scope.
Nov 25 09:31:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41724becc020d213af1c3547ef34fa56618f1d01dc0eb96643a82949e9df543b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41724becc020d213af1c3547ef34fa56618f1d01dc0eb96643a82949e9df543b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41724becc020d213af1c3547ef34fa56618f1d01dc0eb96643a82949e9df543b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:18 compute-0 podman[74477]: 2025-11-25 09:31:18.873512425 +0000 UTC m=+0.100700503 container init 91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8 (image=quay.io/ceph/ceph:v19, name=magical_hugle, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:18 compute-0 podman[74477]: 2025-11-25 09:31:18.877974153 +0000 UTC m=+0.105162212 container start 91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8 (image=quay.io/ceph/ceph:v19, name=magical_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:18 compute-0 podman[74477]: 2025-11-25 09:31:18.879169126 +0000 UTC m=+0.106357184 container attach 91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8 (image=quay.io/ceph/ceph:v19, name=magical_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:31:18 compute-0 podman[74477]: 2025-11-25 09:31:18.795468164 +0000 UTC m=+0.022656232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:18 compute-0 ceph-mgr[74476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:31:18 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'balancer'
Nov 25 09:31:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:18.902+0000 7f0eebe73140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:31:18 compute-0 ceph-mgr[74476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:31:18 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'cephadm'
Nov 25 09:31:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:18.974+0000 7f0eebe73140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:31:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 25 09:31:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305137805' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:31:19 compute-0 magical_hugle[74511]: 
Nov 25 09:31:19 compute-0 magical_hugle[74511]: {
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "health": {
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "status": "HEALTH_OK",
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "checks": {},
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "mutes": []
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     },
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "election_epoch": 5,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "quorum": [
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         0
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     ],
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "quorum_names": [
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "compute-0"
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     ],
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "quorum_age": 1,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "monmap": {
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "epoch": 1,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "min_mon_release_name": "squid",
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "num_mons": 1
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     },
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "osdmap": {
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "epoch": 1,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "num_osds": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "num_up_osds": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "osd_up_since": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "num_in_osds": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "osd_in_since": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "num_remapped_pgs": 0
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     },
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "pgmap": {
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "pgs_by_state": [],
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "num_pgs": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "num_pools": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "num_objects": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "data_bytes": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "bytes_used": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "bytes_avail": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "bytes_total": 0
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     },
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "fsmap": {
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "epoch": 1,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "btime": "2025-11-25T09:31:16:071954+0000",
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "by_rank": [],
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "up:standby": 0
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     },
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "mgrmap": {
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "available": false,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "num_standbys": 0,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "modules": [
Nov 25 09:31:19 compute-0 magical_hugle[74511]:             "iostat",
Nov 25 09:31:19 compute-0 magical_hugle[74511]:             "nfs",
Nov 25 09:31:19 compute-0 magical_hugle[74511]:             "restful"
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         ],
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "services": {}
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     },
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "servicemap": {
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "epoch": 1,
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "modified": "2025-11-25T09:31:16.073226+0000",
Nov 25 09:31:19 compute-0 magical_hugle[74511]:         "services": {}
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     },
Nov 25 09:31:19 compute-0 magical_hugle[74511]:     "progress_events": {}
Nov 25 09:31:19 compute-0 magical_hugle[74511]: }
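
Once the journal prefixes are stripped, the blob above is ordinary "ceph status --format json-pretty" output. A sketch that pulls out the fields that matter at this point in the bootstrap, assuming the payload was captured to a file (hypothetical name):

import json

with open("status.json") as fh:        # hypothetical capture of the blob above
    status = json.load(fh)

print(status["health"]["status"])      # HEALTH_OK
print(status["quorum_names"])          # ['compute-0'] -- single-mon quorum
print(status["osdmap"]["num_osds"])    # 0 -- no OSDs deployed yet
print(status["mgrmap"]["available"])   # False -- mgr is still loading modules
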
Nov 25 09:31:19 compute-0 systemd[1]: libpod-91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8.scope: Deactivated successfully.
Nov 25 09:31:19 compute-0 conmon[74511]: conmon 91d0eebdc23edd9f74cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8.scope/container/memory.events
Nov 25 09:31:19 compute-0 podman[74477]: 2025-11-25 09:31:19.032379894 +0000 UTC m=+0.259567952 container died 91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8 (image=quay.io/ceph/ceph:v19, name=magical_hugle, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-41724becc020d213af1c3547ef34fa56618f1d01dc0eb96643a82949e9df543b-merged.mount: Deactivated successfully.
Nov 25 09:31:19 compute-0 podman[74477]: 2025-11-25 09:31:19.055534983 +0000 UTC m=+0.282723041 container remove 91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8 (image=quay.io/ceph/ceph:v19, name=magical_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:31:19 compute-0 systemd[1]: libpod-conmon-91d0eebdc23edd9f74cf0cd8593f814c3c256a682122cd52ff1afebf5e0ad9b8.scope: Deactivated successfully.
Nov 25 09:31:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1305137805' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:31:19 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'crash'
Nov 25 09:31:19 compute-0 ceph-mgr[74476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:31:19 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'dashboard'
Nov 25 09:31:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:19.621+0000 7f0eebe73140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:31:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:20.131+0000 7f0eebe73140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:31:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:31:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:31:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   from numpy import show_config as show_numpy_config
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:31:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:20.270+0000 7f0eebe73140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'influx'
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'insights'
Nov 25 09:31:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:20.332+0000 7f0eebe73140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'iostat'
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:31:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:20.449+0000 7f0eebe73140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'localpool'
Nov 25 09:31:20 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mirroring'
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'nfs'
Nov 25 09:31:21 compute-0 podman[74558]: 2025-11-25 09:31:21.094068557 +0000 UTC m=+0.020248081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:31:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:21.264+0000 7f0eebe73140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 podman[74558]: 2025-11-25 09:31:21.311960661 +0000 UTC m=+0.238140164 container create de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399 (image=quay.io/ceph/ceph:v19, name=crazy_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:21 compute-0 systemd[1]: Started libpod-conmon-de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399.scope.
Nov 25 09:31:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce4001efb995437e89e72724d7c59d4f3c8d3dff7ed483ceea31262e6a7cfaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce4001efb995437e89e72724d7c59d4f3c8d3dff7ed483ceea31262e6a7cfaa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce4001efb995437e89e72724d7c59d4f3c8d3dff7ed483ceea31262e6a7cfaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:21 compute-0 podman[74558]: 2025-11-25 09:31:21.366926385 +0000 UTC m=+0.293105890 container init de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399 (image=quay.io/ceph/ceph:v19, name=crazy_boyd, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:21 compute-0 podman[74558]: 2025-11-25 09:31:21.371256929 +0000 UTC m=+0.297436433 container start de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399 (image=quay.io/ceph/ceph:v19, name=crazy_boyd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:31:21 compute-0 podman[74558]: 2025-11-25 09:31:21.372464405 +0000 UTC m=+0.298643909 container attach de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399 (image=quay.io/ceph/ceph:v19, name=crazy_boyd, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:31:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:21.446+0000 7f0eebe73140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 25 09:31:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641497151' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:31:21 compute-0 crazy_boyd[74572]: 
Nov 25 09:31:21 compute-0 crazy_boyd[74572]: {
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "health": {
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "status": "HEALTH_OK",
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "checks": {},
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "mutes": []
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     },
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "election_epoch": 5,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "quorum": [
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         0
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     ],
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "quorum_names": [
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "compute-0"
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     ],
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "quorum_age": 3,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "monmap": {
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "epoch": 1,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "min_mon_release_name": "squid",
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "num_mons": 1
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     },
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "osdmap": {
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "epoch": 1,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "num_osds": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "num_up_osds": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "osd_up_since": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "num_in_osds": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "osd_in_since": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "num_remapped_pgs": 0
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     },
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "pgmap": {
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "pgs_by_state": [],
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "num_pgs": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "num_pools": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "num_objects": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "data_bytes": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "bytes_used": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "bytes_avail": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "bytes_total": 0
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     },
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "fsmap": {
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "epoch": 1,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "btime": "2025-11-25T09:31:16.071954+0000",
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "by_rank": [],
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "up:standby": 0
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     },
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "mgrmap": {
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "available": false,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "num_standbys": 0,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "modules": [
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:             "iostat",
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:             "nfs",
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:             "restful"
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         ],
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "services": {}
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     },
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "servicemap": {
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "epoch": 1,
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "modified": "2025-11-25T09:31:16.073226+0000",
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:         "services": {}
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     },
Nov 25 09:31:21 compute-0 crazy_boyd[74572]:     "progress_events": {}
Nov 25 09:31:21 compute-0 crazy_boyd[74572]: }
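[Editor's note] The block above is the pretty-printed JSON answer to the {"prefix": "status", "format": "json-pretty"} mon command dispatched at 09:31:21; cephadm runs it in a throwaway container (podman's random name crazy_boyd) and reads the document from stdout. A minimal sketch of consuming that JSON, assuming it was captured to a file named status.json (the file name is hypothetical):

    import json

    # Parse the status document shown above and pull out the fields the
    # bootstrap cares about at this stage.
    with open("status.json") as f:
        status = json.load(f)

    print(status["health"]["status"])      # "HEALTH_OK"
    print(status["quorum_names"])          # ["compute-0"], a one-mon cluster
    print(status["osdmap"]["num_osds"])    # 0, no OSDs deployed yet
    print(status["mgrmap"]["available"])   # False, the mgr is still starting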
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_support'
Nov 25 09:31:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:21.509+0000 7f0eebe73140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 systemd[1]: libpod-de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399.scope: Deactivated successfully.
Nov 25 09:31:21 compute-0 conmon[74572]: conmon de83cfdc6a0145e3f9c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399.scope/container/memory.events
Nov 25 09:31:21 compute-0 podman[74558]: 2025-11-25 09:31:21.517761572 +0000 UTC m=+0.443941076 container died de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399 (image=quay.io/ceph/ceph:v19, name=crazy_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cce4001efb995437e89e72724d7c59d4f3c8d3dff7ed483ceea31262e6a7cfaa-merged.mount: Deactivated successfully.
Nov 25 09:31:21 compute-0 podman[74558]: 2025-11-25 09:31:21.540858071 +0000 UTC m=+0.467037575 container remove de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399 (image=quay.io/ceph/ceph:v19, name=crazy_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:31:21 compute-0 systemd[1]: libpod-conmon-de83cfdc6a0145e3f9c4c4b1ea9fdb234abf51b3ae86cc3fe0cad961cdd11399.scope: Deactivated successfully.
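[Editor's note] The create, init, start, attach, died, remove sequence above, with its matching libpod/conmon scope units, is the footprint of a single one-shot containerized ceph command. A sketch of the kind of invocation that leaves this trace, assuming bind-mounted config and keyring as the xfs remount messages for ceph.conf and ceph.client.admin.keyring suggest; the exact flags cephadm passes differ:

    import json
    import subprocess

    # One-shot status query in a disposable container, roughly what
    # produced the crazy_boyd lines above.
    out = subprocess.run(
        ["podman", "run", "--rm",
         "-v", "/etc/ceph:/etc/ceph:z",
         "quay.io/ceph/ceph:v19",
         "ceph", "status", "--format", "json-pretty"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out)["health"]["status"])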
Nov 25 09:31:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1641497151' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:31:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:21.568+0000 7f0eebe73140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'progress'
Nov 25 09:31:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:21.635+0000 7f0eebe73140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'prometheus'
Nov 25 09:31:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:21.707+0000 7f0eebe73140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:31:21 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:31:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:21.995+0000 7f0eebe73140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:31:22 compute-0 ceph-mgr[74476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:31:22 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'restful'
Nov 25 09:31:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:22.076+0000 7f0eebe73140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:31:22 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rgw'
Nov 25 09:31:22 compute-0 ceph-mgr[74476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:31:22 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rook'
Nov 25 09:31:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:22.438+0000 7f0eebe73140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:31:22 compute-0 ceph-mgr[74476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:31:22 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'selftest'
Nov 25 09:31:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:22.892+0000 7f0eebe73140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:31:22 compute-0 ceph-mgr[74476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:31:22 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:31:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:22.949+0000 7f0eebe73140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'stats'
Nov 25 09:31:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:23.013+0000 7f0eebe73140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'status'
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telegraf'
Nov 25 09:31:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:23.130+0000 7f0eebe73140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telemetry'
Nov 25 09:31:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:23.189+0000 7f0eebe73140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:31:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:23.313+0000 7f0eebe73140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'volumes'
Nov 25 09:31:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:23.492+0000 7f0eebe73140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 podman[74607]: 2025-11-25 09:31:23.585025872 +0000 UTC m=+0.025374413 container create 0a5a1b805feaa5f9a9d0f31b631f4e9e12e0839737c31d7147cfe1254655ae3f (image=quay.io/ceph/ceph:v19, name=mystifying_einstein, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:23 compute-0 systemd[1]: Started libpod-conmon-0a5a1b805feaa5f9a9d0f31b631f4e9e12e0839737c31d7147cfe1254655ae3f.scope.
Nov 25 09:31:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ff8dfda99b780fbf7967dc9833be56a858368d539fd7b266adc9dfb6c38c20f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ff8dfda99b780fbf7967dc9833be56a858368d539fd7b266adc9dfb6c38c20f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ff8dfda99b780fbf7967dc9833be56a858368d539fd7b266adc9dfb6c38c20f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:23 compute-0 podman[74607]: 2025-11-25 09:31:23.632769593 +0000 UTC m=+0.073118144 container init 0a5a1b805feaa5f9a9d0f31b631f4e9e12e0839737c31d7147cfe1254655ae3f (image=quay.io/ceph/ceph:v19, name=mystifying_einstein, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:31:23 compute-0 podman[74607]: 2025-11-25 09:31:23.636846509 +0000 UTC m=+0.077195040 container start 0a5a1b805feaa5f9a9d0f31b631f4e9e12e0839737c31d7147cfe1254655ae3f (image=quay.io/ceph/ceph:v19, name=mystifying_einstein, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:23 compute-0 podman[74607]: 2025-11-25 09:31:23.638246928 +0000 UTC m=+0.078595480 container attach 0a5a1b805feaa5f9a9d0f31b631f4e9e12e0839737c31d7147cfe1254655ae3f (image=quay.io/ceph/ceph:v19, name=mystifying_einstein, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:23 compute-0 podman[74607]: 2025-11-25 09:31:23.574348645 +0000 UTC m=+0.014697196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:23.725+0000 7f0eebe73140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'zabbix'
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629807675' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]: 
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]: {
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "health": {
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "status": "HEALTH_OK",
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "checks": {},
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "mutes": []
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     },
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "election_epoch": 5,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "quorum": [
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         0
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     ],
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "quorum_names": [
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "compute-0"
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     ],
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "quorum_age": 6,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "monmap": {
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "epoch": 1,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "min_mon_release_name": "squid",
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "num_mons": 1
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     },
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "osdmap": {
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "epoch": 1,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "num_osds": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "num_up_osds": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "osd_up_since": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "num_in_osds": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "osd_in_since": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "num_remapped_pgs": 0
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     },
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "pgmap": {
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "pgs_by_state": [],
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "num_pgs": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "num_pools": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "num_objects": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "data_bytes": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "bytes_used": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "bytes_avail": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "bytes_total": 0
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     },
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "fsmap": {
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "epoch": 1,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "btime": "2025-11-25T09:31:16.071954+0000",
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "by_rank": [],
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "up:standby": 0
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     },
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "mgrmap": {
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "available": false,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "num_standbys": 0,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "modules": [
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:             "iostat",
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:             "nfs",
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:             "restful"
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         ],
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "services": {}
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     },
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "servicemap": {
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "epoch": 1,
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "modified": "2025-11-25T09:31:16.073226+0000",
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:         "services": {}
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     },
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]:     "progress_events": {}
Nov 25 09:31:23 compute-0 mystifying_einstein[74621]: }
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:23.788+0000 7f0eebe73140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:31:23 compute-0 systemd[1]: libpod-0a5a1b805feaa5f9a9d0f31b631f4e9e12e0839737c31d7147cfe1254655ae3f.scope: Deactivated successfully.
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: ms_deliver_dispatch: unhandled message 0x560fef3ae9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 09:31:23 compute-0 podman[74607]: 2025-11-25 09:31:23.790837974 +0000 UTC m=+0.231186505 container died 0a5a1b805feaa5f9a9d0f31b631f4e9e12e0839737c31d7147cfe1254655ae3f (image=quay.io/ceph/ceph:v19, name=mystifying_einstein, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zcfgby
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.zcfgby(active, starting, since 0.00476331s)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr handle_mgr_map Activating!
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr handle_mgr_map I am now activating
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"} v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ff8dfda99b780fbf7967dc9833be56a858368d539fd7b266adc9dfb6c38c20f-merged.mount: Deactivated successfully.
Nov 25 09:31:23 compute-0 podman[74607]: 2025-11-25 09:31:23.808302406 +0000 UTC m=+0.248650938 container remove 0a5a1b805feaa5f9a9d0f31b631f4e9e12e0839737c31d7147cfe1254655ae3f (image=quay.io/ceph/ceph:v19, name=mystifying_einstein, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Manager daemon compute-0.zcfgby is now available
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: balancer
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [balancer INFO root] Starting
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: crash
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:31:23
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [balancer INFO root] No pools available
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: devicehealth
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Starting
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: iostat
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: nfs
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: orchestrator
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: pg_autoscaler
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: progress
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [progress INFO root] Loading...
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [progress INFO root] No stored events to load
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded [] historic events
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 09:31:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/629807675' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: Activating manager daemon compute-0.zcfgby
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mgrmap e2: compute-0.zcfgby(active, starting, since 0.00476331s)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: Manager daemon compute-0.zcfgby is now available
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support INFO root] recovery thread starting
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support INFO root] starting setup
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: rbd_support
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: restful
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: status
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [restful WARNING root] server not running: no certificate configured
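[Editor's note] The restful module constructed successfully but refuses to serve on its port 8003 without TLS material. The documented remedy is to give it a certificate, for example the module's own self-signed one; a sketch via the CLI, run through subprocess purely for illustration:

    import subprocess

    # Generate and store a self-signed certificate for the restful module,
    # clearing the "server not running: no certificate configured" warning.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)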
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: telemetry
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:23 compute-0 systemd[1]: libpod-conmon-0a5a1b805feaa5f9a9d0f31b631f4e9e12e0839737c31d7147cfe1254655ae3f.scope: Deactivated successfully.
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"} v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support INFO root] PerfHandler: starting
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TaskHandler: starting
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"} v 0)
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:31:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: [rbd_support INFO root] setup complete
Nov 25 09:31:23 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: volumes
Nov 25 09:31:24 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.zcfgby(active, since 1.00906s)
Nov 25 09:31:24 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:24 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:31:24 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:24 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:31:24 compute-0 ceph-mon[74207]: from='mgr.14102 192.168.122.100:0/3227626532' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:24 compute-0 ceph-mon[74207]: mgrmap e3: compute-0.zcfgby(active, since 1.00906s)
Nov 25 09:31:25 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:25 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.zcfgby(active, since 2s)
Nov 25 09:31:25 compute-0 podman[74736]: 2025-11-25 09:31:25.857055031 +0000 UTC m=+0.029903372 container create 4a670565ce9e278f49f536123e540f2b1dc2363ba17720453a6383960f2b3095 (image=quay.io/ceph/ceph:v19, name=exciting_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:25 compute-0 systemd[1]: Started libpod-conmon-4a670565ce9e278f49f536123e540f2b1dc2363ba17720453a6383960f2b3095.scope.
Nov 25 09:31:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5824d805d96f2c3037378935b2480851a93b413b0037e7fceb9f85959367a61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5824d805d96f2c3037378935b2480851a93b413b0037e7fceb9f85959367a61/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5824d805d96f2c3037378935b2480851a93b413b0037e7fceb9f85959367a61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:25 compute-0 podman[74736]: 2025-11-25 09:31:25.906194182 +0000 UTC m=+0.079042543 container init 4a670565ce9e278f49f536123e540f2b1dc2363ba17720453a6383960f2b3095 (image=quay.io/ceph/ceph:v19, name=exciting_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:31:25 compute-0 podman[74736]: 2025-11-25 09:31:25.909983786 +0000 UTC m=+0.082832126 container start 4a670565ce9e278f49f536123e540f2b1dc2363ba17720453a6383960f2b3095 (image=quay.io/ceph/ceph:v19, name=exciting_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:25 compute-0 podman[74736]: 2025-11-25 09:31:25.911184619 +0000 UTC m=+0.084032961 container attach 4a670565ce9e278f49f536123e540f2b1dc2363ba17720453a6383960f2b3095 (image=quay.io/ceph/ceph:v19, name=exciting_merkle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 09:31:25 compute-0 podman[74736]: 2025-11-25 09:31:25.846448717 +0000 UTC m=+0.019297068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 25 09:31:26 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561850109' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:31:26 compute-0 exciting_merkle[74749]: 
Nov 25 09:31:26 compute-0 exciting_merkle[74749]: {
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "health": {
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "status": "HEALTH_OK",
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "checks": {},
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "mutes": []
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     },
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "election_epoch": 5,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "quorum": [
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         0
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     ],
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "quorum_names": [
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "compute-0"
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     ],
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "quorum_age": 8,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "monmap": {
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "epoch": 1,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "min_mon_release_name": "squid",
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "num_mons": 1
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     },
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "osdmap": {
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "epoch": 1,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "num_osds": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "num_up_osds": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "osd_up_since": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "num_in_osds": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "osd_in_since": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "num_remapped_pgs": 0
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     },
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "pgmap": {
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "pgs_by_state": [],
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "num_pgs": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "num_pools": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "num_objects": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "data_bytes": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "bytes_used": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "bytes_avail": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "bytes_total": 0
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     },
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "fsmap": {
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "epoch": 1,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "btime": "2025-11-25T09:31:16.071954+0000",
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "by_rank": [],
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "up:standby": 0
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     },
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "mgrmap": {
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "available": true,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "num_standbys": 0,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "modules": [
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:             "iostat",
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:             "nfs",
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:             "restful"
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         ],
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "services": {}
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     },
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "servicemap": {
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "epoch": 1,
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "modified": "2025-11-25T09:31:16.073226+0000",
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:         "services": {}
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     },
Nov 25 09:31:26 compute-0 exciting_merkle[74749]:     "progress_events": {}
Nov 25 09:31:26 compute-0 exciting_merkle[74749]: }
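[Editor's note] This third status dump finally reports "available": true in mgrmap, following the manager activation at 09:31:23; the three dumps (quorum_age 3, 6, 8) read as a polling loop waiting for the mgr. A sketch of that loop, assuming the ceph CLI is reachable on the host; the two-second sleep approximates the spacing seen in the log:

    import json
    import subprocess
    import time

    def ceph_status() -> dict:
        out = subprocess.run(
            ["ceph", "status", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    # Poll until the active mgr reports itself available, as the log's
    # repeated status calls do between 09:31:21 and 09:31:26.
    while not ceph_status()["mgrmap"]["available"]:
        time.sleep(2)
    print("mgr is available")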
Nov 25 09:31:26 compute-0 systemd[1]: libpod-4a670565ce9e278f49f536123e540f2b1dc2363ba17720453a6383960f2b3095.scope: Deactivated successfully.
Nov 25 09:31:26 compute-0 podman[74775]: 2025-11-25 09:31:26.270475021 +0000 UTC m=+0.016214866 container died 4a670565ce9e278f49f536123e540f2b1dc2363ba17720453a6383960f2b3095 (image=quay.io/ceph/ceph:v19, name=exciting_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5824d805d96f2c3037378935b2480851a93b413b0037e7fceb9f85959367a61-merged.mount: Deactivated successfully.
Nov 25 09:31:26 compute-0 podman[74775]: 2025-11-25 09:31:26.28585851 +0000 UTC m=+0.031598356 container remove 4a670565ce9e278f49f536123e540f2b1dc2363ba17720453a6383960f2b3095 (image=quay.io/ceph/ceph:v19, name=exciting_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:26 compute-0 systemd[1]: libpod-conmon-4a670565ce9e278f49f536123e540f2b1dc2363ba17720453a6383960f2b3095.scope: Deactivated successfully.
Nov 25 09:31:26 compute-0 podman[74787]: 2025-11-25 09:31:26.327730361 +0000 UTC m=+0.025542621 container create 71f3c76eb096871e44426a1837b8d05089cc176c3433869dd5e186a7518249d1 (image=quay.io/ceph/ceph:v19, name=kind_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:31:26 compute-0 systemd[1]: Started libpod-conmon-71f3c76eb096871e44426a1837b8d05089cc176c3433869dd5e186a7518249d1.scope.
Nov 25 09:31:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba3902d75ef476de24fbe277a537acf97b3bb67fa8983eaba970c4b7123fc74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba3902d75ef476de24fbe277a537acf97b3bb67fa8983eaba970c4b7123fc74/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba3902d75ef476de24fbe277a537acf97b3bb67fa8983eaba970c4b7123fc74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba3902d75ef476de24fbe277a537acf97b3bb67fa8983eaba970c4b7123fc74/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:26 compute-0 podman[74787]: 2025-11-25 09:31:26.385553352 +0000 UTC m=+0.083365631 container init 71f3c76eb096871e44426a1837b8d05089cc176c3433869dd5e186a7518249d1 (image=quay.io/ceph/ceph:v19, name=kind_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:31:26 compute-0 podman[74787]: 2025-11-25 09:31:26.389235904 +0000 UTC m=+0.087048164 container start 71f3c76eb096871e44426a1837b8d05089cc176c3433869dd5e186a7518249d1 (image=quay.io/ceph/ceph:v19, name=kind_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:26 compute-0 podman[74787]: 2025-11-25 09:31:26.390371185 +0000 UTC m=+0.088183454 container attach 71f3c76eb096871e44426a1837b8d05089cc176c3433869dd5e186a7518249d1 (image=quay.io/ceph/ceph:v19, name=kind_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:26 compute-0 podman[74787]: 2025-11-25 09:31:26.317320758 +0000 UTC m=+0.015133038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 25 09:31:26 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3391656080' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 09:31:26 compute-0 kind_ishizaka[74800]: 
Nov 25 09:31:26 compute-0 kind_ishizaka[74800]: [global]
Nov 25 09:31:26 compute-0 kind_ishizaka[74800]:         fsid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:26 compute-0 kind_ishizaka[74800]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 25 09:31:26 compute-0 systemd[1]: libpod-71f3c76eb096871e44426a1837b8d05089cc176c3433869dd5e186a7518249d1.scope: Deactivated successfully.
Nov 25 09:31:26 compute-0 podman[74826]: 2025-11-25 09:31:26.677402512 +0000 UTC m=+0.015906015 container died 71f3c76eb096871e44426a1837b8d05089cc176c3433869dd5e186a7518249d1 (image=quay.io/ceph/ceph:v19, name=kind_ishizaka, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-dba3902d75ef476de24fbe277a537acf97b3bb67fa8983eaba970c4b7123fc74-merged.mount: Deactivated successfully.
Nov 25 09:31:26 compute-0 podman[74826]: 2025-11-25 09:31:26.694099587 +0000 UTC m=+0.032603080 container remove 71f3c76eb096871e44426a1837b8d05089cc176c3433869dd5e186a7518249d1 (image=quay.io/ceph/ceph:v19, name=kind_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:26 compute-0 systemd[1]: libpod-conmon-71f3c76eb096871e44426a1837b8d05089cc176c3433869dd5e186a7518249d1.scope: Deactivated successfully.
Nov 25 09:31:26 compute-0 podman[74838]: 2025-11-25 09:31:26.737182883 +0000 UTC m=+0.026601748 container create cd30beb3171fb59428621a952bfa44aa7cb163e318e7d58590760469c19ba718 (image=quay.io/ceph/ceph:v19, name=pensive_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:31:26 compute-0 systemd[1]: Started libpod-conmon-cd30beb3171fb59428621a952bfa44aa7cb163e318e7d58590760469c19ba718.scope.
Nov 25 09:31:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09418c960df757b6ee239613b772b8034fe96bd2c0c72b06c44f9a8a187a1bb6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09418c960df757b6ee239613b772b8034fe96bd2c0c72b06c44f9a8a187a1bb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09418c960df757b6ee239613b772b8034fe96bd2c0c72b06c44f9a8a187a1bb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:26 compute-0 podman[74838]: 2025-11-25 09:31:26.788408276 +0000 UTC m=+0.077827142 container init cd30beb3171fb59428621a952bfa44aa7cb163e318e7d58590760469c19ba718 (image=quay.io/ceph/ceph:v19, name=pensive_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:31:26 compute-0 podman[74838]: 2025-11-25 09:31:26.79218715 +0000 UTC m=+0.081606015 container start cd30beb3171fb59428621a952bfa44aa7cb163e318e7d58590760469c19ba718 (image=quay.io/ceph/ceph:v19, name=pensive_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:26 compute-0 podman[74838]: 2025-11-25 09:31:26.793361434 +0000 UTC m=+0.082780309 container attach cd30beb3171fb59428621a952bfa44aa7cb163e318e7d58590760469c19ba718 (image=quay.io/ceph/ceph:v19, name=pensive_kalam, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:26 compute-0 podman[74838]: 2025-11-25 09:31:26.726185371 +0000 UTC m=+0.015604256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:26 compute-0 ceph-mon[74207]: mgrmap e4: compute-0.zcfgby(active, since 2s)
Nov 25 09:31:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2561850109' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 09:31:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3391656080' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 09:31:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Nov 25 09:31:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3198231572' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 09:31:27 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3198231572' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 09:31:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3198231572' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 09:31:27 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.zcfgby(active, since 4s)
Nov 25 09:31:27 compute-0 systemd[1]: libpod-cd30beb3171fb59428621a952bfa44aa7cb163e318e7d58590760469c19ba718.scope: Deactivated successfully.
Nov 25 09:31:27 compute-0 podman[74877]: 2025-11-25 09:31:27.890918177 +0000 UTC m=+0.015745392 container died cd30beb3171fb59428621a952bfa44aa7cb163e318e7d58590760469c19ba718 (image=quay.io/ceph/ceph:v19, name=pensive_kalam, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-09418c960df757b6ee239613b772b8034fe96bd2c0c72b06c44f9a8a187a1bb6-merged.mount: Deactivated successfully.
Nov 25 09:31:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setuser ceph since I am not root
Nov 25 09:31:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setgroup ceph since I am not root
Nov 25 09:31:27 compute-0 podman[74877]: 2025-11-25 09:31:27.907144865 +0000 UTC m=+0.031972051 container remove cd30beb3171fb59428621a952bfa44aa7cb163e318e7d58590760469c19ba718 (image=quay.io/ceph/ceph:v19, name=pensive_kalam, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:31:27 compute-0 ceph-mgr[74476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:31:27 compute-0 ceph-mgr[74476]: pidfile_write: ignore empty --pid-file
Nov 25 09:31:27 compute-0 systemd[1]: libpod-conmon-cd30beb3171fb59428621a952bfa44aa7cb163e318e7d58590760469c19ba718.scope: Deactivated successfully.
Nov 25 09:31:27 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'alerts'
Nov 25 09:31:27 compute-0 podman[74909]: 2025-11-25 09:31:27.953996375 +0000 UTC m=+0.026960384 container create 6adcdbb6f10de305788d86ad30438cc64054cabc0c138b0ef7ee87812792c6bf (image=quay.io/ceph/ceph:v19, name=elegant_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 09:31:27 compute-0 systemd[1]: Started libpod-conmon-6adcdbb6f10de305788d86ad30438cc64054cabc0c138b0ef7ee87812792c6bf.scope.
Nov 25 09:31:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444258c742430b011a55d7f253daf41baa7125fe9f8874e1e64e095b217c6696/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444258c742430b011a55d7f253daf41baa7125fe9f8874e1e64e095b217c6696/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444258c742430b011a55d7f253daf41baa7125fe9f8874e1e64e095b217c6696/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:28 compute-0 podman[74909]: 2025-11-25 09:31:28.005139293 +0000 UTC m=+0.078103313 container init 6adcdbb6f10de305788d86ad30438cc64054cabc0c138b0ef7ee87812792c6bf (image=quay.io/ceph/ceph:v19, name=elegant_kare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:31:28 compute-0 podman[74909]: 2025-11-25 09:31:28.009231197 +0000 UTC m=+0.082195206 container start 6adcdbb6f10de305788d86ad30438cc64054cabc0c138b0ef7ee87812792c6bf (image=quay.io/ceph/ceph:v19, name=elegant_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:28 compute-0 podman[74909]: 2025-11-25 09:31:28.010402135 +0000 UTC m=+0.083366144 container attach 6adcdbb6f10de305788d86ad30438cc64054cabc0c138b0ef7ee87812792c6bf (image=quay.io/ceph/ceph:v19, name=elegant_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Nov 25 09:31:28 compute-0 ceph-mgr[74476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:31:28 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'balancer'
Nov 25 09:31:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:28.018+0000 7fdb0852b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:31:28 compute-0 podman[74909]: 2025-11-25 09:31:27.942658071 +0000 UTC m=+0.015622100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:28 compute-0 ceph-mgr[74476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:31:28 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'cephadm'
Nov 25 09:31:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:28.088+0000 7fdb0852b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:31:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 25 09:31:28 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332041081' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 09:31:28 compute-0 elegant_kare[74924]: {
Nov 25 09:31:28 compute-0 elegant_kare[74924]:     "epoch": 5,
Nov 25 09:31:28 compute-0 elegant_kare[74924]:     "available": true,
Nov 25 09:31:28 compute-0 elegant_kare[74924]:     "active_name": "compute-0.zcfgby",
Nov 25 09:31:28 compute-0 elegant_kare[74924]:     "num_standby": 0
Nov 25 09:31:28 compute-0 elegant_kare[74924]: }
Nov 25 09:31:28 compute-0 systemd[1]: libpod-6adcdbb6f10de305788d86ad30438cc64054cabc0c138b0ef7ee87812792c6bf.scope: Deactivated successfully.
Nov 25 09:31:28 compute-0 podman[74950]: 2025-11-25 09:31:28.330628587 +0000 UTC m=+0.015419167 container died 6adcdbb6f10de305788d86ad30438cc64054cabc0c138b0ef7ee87812792c6bf (image=quay.io/ceph/ceph:v19, name=elegant_kare, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-444258c742430b011a55d7f253daf41baa7125fe9f8874e1e64e095b217c6696-merged.mount: Deactivated successfully.
Nov 25 09:31:28 compute-0 podman[74950]: 2025-11-25 09:31:28.34848052 +0000 UTC m=+0.033271100 container remove 6adcdbb6f10de305788d86ad30438cc64054cabc0c138b0ef7ee87812792c6bf (image=quay.io/ceph/ceph:v19, name=elegant_kare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:31:28 compute-0 systemd[1]: libpod-conmon-6adcdbb6f10de305788d86ad30438cc64054cabc0c138b0ef7ee87812792c6bf.scope: Deactivated successfully.
Nov 25 09:31:28 compute-0 podman[74962]: 2025-11-25 09:31:28.395685475 +0000 UTC m=+0.029810897 container create 2b8f22dade1a78c222f37bcdefab91a84929f33762e3efe0d611eca345c811bd (image=quay.io/ceph/ceph:v19, name=keen_faraday, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:31:28 compute-0 systemd[1]: Started libpod-conmon-2b8f22dade1a78c222f37bcdefab91a84929f33762e3efe0d611eca345c811bd.scope.
Nov 25 09:31:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd83c3f4edd000a41713d12a338f902c8c6a221b8d0ba5f500e866a2258f1b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd83c3f4edd000a41713d12a338f902c8c6a221b8d0ba5f500e866a2258f1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd83c3f4edd000a41713d12a338f902c8c6a221b8d0ba5f500e866a2258f1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:28 compute-0 podman[74962]: 2025-11-25 09:31:28.454526044 +0000 UTC m=+0.088651476 container init 2b8f22dade1a78c222f37bcdefab91a84929f33762e3efe0d611eca345c811bd (image=quay.io/ceph/ceph:v19, name=keen_faraday, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:28 compute-0 podman[74962]: 2025-11-25 09:31:28.458178679 +0000 UTC m=+0.092304101 container start 2b8f22dade1a78c222f37bcdefab91a84929f33762e3efe0d611eca345c811bd (image=quay.io/ceph/ceph:v19, name=keen_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 09:31:28 compute-0 podman[74962]: 2025-11-25 09:31:28.459326112 +0000 UTC m=+0.093451535 container attach 2b8f22dade1a78c222f37bcdefab91a84929f33762e3efe0d611eca345c811bd (image=quay.io/ceph/ceph:v19, name=keen_faraday, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:31:28 compute-0 podman[74962]: 2025-11-25 09:31:28.381994867 +0000 UTC m=+0.016120309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:28 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'crash'
Nov 25 09:31:28 compute-0 ceph-mgr[74476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:31:28 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'dashboard'
Nov 25 09:31:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:28.747+0000 7fdb0852b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:31:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3198231572' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 09:31:28 compute-0 ceph-mon[74207]: mgrmap e5: compute-0.zcfgby(active, since 4s)
Nov 25 09:31:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2332041081' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:31:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:29.247+0000 7fdb0852b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:31:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:31:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:31:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   from numpy import show_config as show_numpy_config
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'influx'
Nov 25 09:31:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:29.381+0000 7fdb0852b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'insights'
Nov 25 09:31:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:29.439+0000 7fdb0852b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'iostat'
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:31:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:29.550+0000 7fdb0852b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'localpool'
Nov 25 09:31:29 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mirroring'
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'nfs'
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:31:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:30.341+0000 7fdb0852b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:31:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:30.525+0000 7fdb0852b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_support'
Nov 25 09:31:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:30.587+0000 7fdb0852b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:30.640+0000 7fdb0852b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:30.704+0000 7fdb0852b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'progress'
Nov 25 09:31:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:30.760+0000 7fdb0852b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:31:30 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'prometheus'
Nov 25 09:31:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:31.046+0000 7fdb0852b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:31:31 compute-0 ceph-mgr[74476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:31:31 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:31:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:31.126+0000 7fdb0852b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:31:31 compute-0 ceph-mgr[74476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:31:31 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'restful'
Nov 25 09:31:31 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rgw'
Nov 25 09:31:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:31.490+0000 7fdb0852b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:31:31 compute-0 ceph-mgr[74476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:31:31 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rook'
Nov 25 09:31:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:31.943+0000 7fdb0852b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:31:31 compute-0 ceph-mgr[74476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:31:31 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'selftest'
Nov 25 09:31:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:32.000+0000 7fdb0852b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:31:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:32.064+0000 7fdb0852b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'stats'
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'status'
Nov 25 09:31:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:32.182+0000 7fdb0852b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telegraf'
Nov 25 09:31:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:32.238+0000 7fdb0852b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telemetry'
Nov 25 09:31:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:32.364+0000 7fdb0852b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:31:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:32.545+0000 7fdb0852b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'volumes'
Nov 25 09:31:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:32.759+0000 7fdb0852b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'zabbix'
Nov 25 09:31:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:31:32.818+0000 7fdb0852b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zcfgby restarted
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zcfgby
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: ms_deliver_dispatch: unhandled message 0x559d0616ad00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr handle_mgr_map Activating!
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr handle_mgr_map I am now activating
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.zcfgby(active, starting, since 0.00528203s)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"} v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: balancer
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [balancer INFO root] Starting
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:31:32
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [balancer INFO root] No pools available
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Manager daemon compute-0.zcfgby is now available
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: cephadm
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: crash
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: devicehealth
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: iostat
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: nfs
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Starting
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: orchestrator
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: pg_autoscaler
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: progress
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [progress INFO root] Loading...
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [progress INFO root] No stored events to load
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded [] historic events
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] recovery thread starting
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] starting setup
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: rbd_support
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: restful
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"} v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: status
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [restful WARNING root] server not running: no certificate configured
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: telemetry
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 09:31:32 compute-0 ceph-mon[74207]: Active manager daemon compute-0.zcfgby restarted
Nov 25 09:31:32 compute-0 ceph-mon[74207]: Activating manager daemon compute-0.zcfgby
Nov 25 09:31:32 compute-0 ceph-mon[74207]: osdmap e2: 0 total, 0 up, 0 in
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mgrmap e6: compute-0.zcfgby(active, starting, since 0.00528203s)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: Manager daemon compute-0.zcfgby is now available
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] PerfHandler: starting
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TaskHandler: starting
Nov 25 09:31:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"} v 0)
Nov 25 09:31:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] setup complete
Nov 25 09:31:32 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: volumes
Nov 25 09:31:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Nov 25 09:31:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Nov 25 09:31:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:33 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 25 09:31:33 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.zcfgby(active, since 1.00828s)
Nov 25 09:31:33 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 25 09:31:33 compute-0 keen_faraday[74986]: {
Nov 25 09:31:33 compute-0 keen_faraday[74986]:     "mgrmap_epoch": 7,
Nov 25 09:31:33 compute-0 keen_faraday[74986]:     "initialized": true
Nov 25 09:31:33 compute-0 keen_faraday[74986]: }
Nov 25 09:31:33 compute-0 systemd[1]: libpod-2b8f22dade1a78c222f37bcdefab91a84929f33762e3efe0d611eca345c811bd.scope: Deactivated successfully.
Nov 25 09:31:33 compute-0 podman[74962]: 2025-11-25 09:31:33.849197346 +0000 UTC m=+5.483322768 container died 2b8f22dade1a78c222f37bcdefab91a84929f33762e3efe0d611eca345c811bd (image=quay.io/ceph/ceph:v19, name=keen_faraday, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-8edd83c3f4edd000a41713d12a338f902c8c6a221b8d0ba5f500e866a2258f1b-merged.mount: Deactivated successfully.
Nov 25 09:31:33 compute-0 podman[74962]: 2025-11-25 09:31:33.868637902 +0000 UTC m=+5.502763325 container remove 2b8f22dade1a78c222f37bcdefab91a84929f33762e3efe0d611eca345c811bd (image=quay.io/ceph/ceph:v19, name=keen_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:31:33 compute-0 systemd[1]: libpod-conmon-2b8f22dade1a78c222f37bcdefab91a84929f33762e3efe0d611eca345c811bd.scope: Deactivated successfully.
Nov 25 09:31:33 compute-0 ceph-mon[74207]: Found migration_current of "None". Setting to last migration.
Nov 25 09:31:33 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:31:33 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:33 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:33 compute-0 ceph-mon[74207]: mgrmap e7: compute-0.zcfgby(active, since 1.00828s)
Nov 25 09:31:33 compute-0 podman[75133]: 2025-11-25 09:31:33.911224351 +0000 UTC m=+0.026671228 container create 0063a60c5bf47d16ec28fe690cca58a479d04143ca8c7ff89a7c85686f8cd8b1 (image=quay.io/ceph/ceph:v19, name=hopeful_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:31:33 compute-0 systemd[1]: Started libpod-conmon-0063a60c5bf47d16ec28fe690cca58a479d04143ca8c7ff89a7c85686f8cd8b1.scope.
Nov 25 09:31:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ccd5320b66d267224cf1cc81ca84c732b5287e4083b45656756fb069522593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ccd5320b66d267224cf1cc81ca84c732b5287e4083b45656756fb069522593/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ccd5320b66d267224cf1cc81ca84c732b5287e4083b45656756fb069522593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:33 compute-0 podman[75133]: 2025-11-25 09:31:33.971104691 +0000 UTC m=+0.086551578 container init 0063a60c5bf47d16ec28fe690cca58a479d04143ca8c7ff89a7c85686f8cd8b1 (image=quay.io/ceph/ceph:v19, name=hopeful_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:33 compute-0 podman[75133]: 2025-11-25 09:31:33.975303787 +0000 UTC m=+0.090750663 container start 0063a60c5bf47d16ec28fe690cca58a479d04143ca8c7ff89a7c85686f8cd8b1 (image=quay.io/ceph/ceph:v19, name=hopeful_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:33 compute-0 podman[75133]: 2025-11-25 09:31:33.976226245 +0000 UTC m=+0.091673122 container attach 0063a60c5bf47d16ec28fe690cca58a479d04143ca8c7ff89a7c85686f8cd8b1 (image=quay.io/ceph/ceph:v19, name=hopeful_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:33 compute-0 podman[75133]: 2025-11-25 09:31:33.900226419 +0000 UTC m=+0.015673315 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Nov 25 09:31:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 25 09:31:34 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:34 compute-0 systemd[1]: libpod-0063a60c5bf47d16ec28fe690cca58a479d04143ca8c7ff89a7c85686f8cd8b1.scope: Deactivated successfully.
Nov 25 09:31:34 compute-0 podman[75133]: 2025-11-25 09:31:34.252785351 +0000 UTC m=+0.368232238 container died 0063a60c5bf47d16ec28fe690cca58a479d04143ca8c7ff89a7c85686f8cd8b1 (image=quay.io/ceph/ceph:v19, name=hopeful_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-78ccd5320b66d267224cf1cc81ca84c732b5287e4083b45656756fb069522593-merged.mount: Deactivated successfully.
Nov 25 09:31:34 compute-0 podman[75133]: 2025-11-25 09:31:34.269240911 +0000 UTC m=+0.384687788 container remove 0063a60c5bf47d16ec28fe690cca58a479d04143ca8c7ff89a7c85686f8cd8b1 (image=quay.io/ceph/ceph:v19, name=hopeful_nightingale, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:34 compute-0 systemd[1]: libpod-conmon-0063a60c5bf47d16ec28fe690cca58a479d04143ca8c7ff89a7c85686f8cd8b1.scope: Deactivated successfully.
Nov 25 09:31:34 compute-0 podman[75180]: 2025-11-25 09:31:34.315606274 +0000 UTC m=+0.031358655 container create 50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9 (image=quay.io/ceph/ceph:v19, name=compassionate_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:34 compute-0 systemd[1]: Started libpod-conmon-50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9.scope.
Nov 25 09:31:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91617d68ef812470e2ee58e790dbdd64f3daa1f80160a060556bcca8b3b6b1c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91617d68ef812470e2ee58e790dbdd64f3daa1f80160a060556bcca8b3b6b1c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91617d68ef812470e2ee58e790dbdd64f3daa1f80160a060556bcca8b3b6b1c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:34 compute-0 podman[75180]: 2025-11-25 09:31:34.375643399 +0000 UTC m=+0.091395779 container init 50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9 (image=quay.io/ceph/ceph:v19, name=compassionate_khayyam, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:31:34 compute-0 podman[75180]: 2025-11-25 09:31:34.379529594 +0000 UTC m=+0.095281975 container start 50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9 (image=quay.io/ceph/ceph:v19, name=compassionate_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:34 compute-0 podman[75180]: 2025-11-25 09:31:34.38070975 +0000 UTC m=+0.096462120 container attach 50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9 (image=quay.io/ceph/ceph:v19, name=compassionate_khayyam, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:31:34] ENGINE Bus STARTING
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:31:34] ENGINE Bus STARTING
Nov 25 09:31:34 compute-0 podman[75180]: 2025-11-25 09:31:34.302996935 +0000 UTC m=+0.018749325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:31:34] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:31:34] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:31:34] ENGINE Client ('192.168.122.100', 57068) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:31:34] ENGINE Client ('192.168.122.100', 57068) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:31:34] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:31:34] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:31:34] ENGINE Bus STARTED
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:31:34] ENGINE Bus STARTED
Nov 25 09:31:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 25 09:31:34 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Nov 25 09:31:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: [cephadm INFO root] Set ssh ssh_user
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 25 09:31:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Nov 25 09:31:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: [cephadm INFO root] Set ssh ssh_config
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 25 09:31:34 compute-0 compassionate_khayyam[75197]: ssh user set to ceph-admin. sudo will be used
Nov 25 09:31:34 compute-0 systemd[1]: libpod-50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9.scope: Deactivated successfully.
Nov 25 09:31:34 compute-0 conmon[75197]: conmon 50447437d16ce93f45c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9.scope/container/memory.events
Nov 25 09:31:34 compute-0 podman[75180]: 2025-11-25 09:31:34.655052337 +0000 UTC m=+0.370804718 container died 50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9 (image=quay.io/ceph/ceph:v19, name=compassionate_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-91617d68ef812470e2ee58e790dbdd64f3daa1f80160a060556bcca8b3b6b1c1-merged.mount: Deactivated successfully.
Nov 25 09:31:34 compute-0 podman[75180]: 2025-11-25 09:31:34.670584847 +0000 UTC m=+0.386337228 container remove 50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9 (image=quay.io/ceph/ceph:v19, name=compassionate_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:31:34 compute-0 systemd[1]: libpod-conmon-50447437d16ce93f45c27191eeb81845e7e080e5bdec8bfb6088f8e7d25ca6f9.scope: Deactivated successfully.
Nov 25 09:31:34 compute-0 podman[75253]: 2025-11-25 09:31:34.709736451 +0000 UTC m=+0.026147130 container create 32c51e83c2205f784d91a8fb0c889a91e7af6c972179fd2de9541f1e16d009b6 (image=quay.io/ceph/ceph:v19, name=suspicious_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:34 compute-0 systemd[1]: Started libpod-conmon-32c51e83c2205f784d91a8fb0c889a91e7af6c972179fd2de9541f1e16d009b6.scope.
Nov 25 09:31:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ddff1da8ea0d4a6ce27cf112eaddf9168b19d6eeb5b7ec49249b640a499b1d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ddff1da8ea0d4a6ce27cf112eaddf9168b19d6eeb5b7ec49249b640a499b1d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ddff1da8ea0d4a6ce27cf112eaddf9168b19d6eeb5b7ec49249b640a499b1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ddff1da8ea0d4a6ce27cf112eaddf9168b19d6eeb5b7ec49249b640a499b1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ddff1da8ea0d4a6ce27cf112eaddf9168b19d6eeb5b7ec49249b640a499b1d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:34 compute-0 podman[75253]: 2025-11-25 09:31:34.75859786 +0000 UTC m=+0.075008549 container init 32c51e83c2205f784d91a8fb0c889a91e7af6c972179fd2de9541f1e16d009b6 (image=quay.io/ceph/ceph:v19, name=suspicious_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:34 compute-0 podman[75253]: 2025-11-25 09:31:34.762700954 +0000 UTC m=+0.079111633 container start 32c51e83c2205f784d91a8fb0c889a91e7af6c972179fd2de9541f1e16d009b6 (image=quay.io/ceph/ceph:v19, name=suspicious_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 25 09:31:34 compute-0 podman[75253]: 2025-11-25 09:31:34.765180939 +0000 UTC m=+0.081591618 container attach 32c51e83c2205f784d91a8fb0c889a91e7af6c972179fd2de9541f1e16d009b6 (image=quay.io/ceph/ceph:v19, name=suspicious_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:31:34 compute-0 podman[75253]: 2025-11-25 09:31:34.699170644 +0000 UTC m=+0.015581333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:34 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:34 compute-0 ceph-mon[74207]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 25 09:31:34 compute-0 ceph-mon[74207]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 25 09:31:34 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:34 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:34 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:34 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:34 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:35 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Nov 25 09:31:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:35 compute-0 ceph-mgr[74476]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 25 09:31:35 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 25 09:31:35 compute-0 ceph-mgr[74476]: [cephadm INFO root] Set ssh private key
Nov 25 09:31:35 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 25 09:31:35 compute-0 systemd[1]: libpod-32c51e83c2205f784d91a8fb0c889a91e7af6c972179fd2de9541f1e16d009b6.scope: Deactivated successfully.
Nov 25 09:31:35 compute-0 podman[75253]: 2025-11-25 09:31:35.031146946 +0000 UTC m=+0.347557625 container died 32c51e83c2205f784d91a8fb0c889a91e7af6c972179fd2de9541f1e16d009b6 (image=quay.io/ceph/ceph:v19, name=suspicious_wozniak, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7ddff1da8ea0d4a6ce27cf112eaddf9168b19d6eeb5b7ec49249b640a499b1d-merged.mount: Deactivated successfully.
Nov 25 09:31:35 compute-0 podman[75253]: 2025-11-25 09:31:35.047281561 +0000 UTC m=+0.363692240 container remove 32c51e83c2205f784d91a8fb0c889a91e7af6c972179fd2de9541f1e16d009b6 (image=quay.io/ceph/ceph:v19, name=suspicious_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:35 compute-0 systemd[1]: libpod-conmon-32c51e83c2205f784d91a8fb0c889a91e7af6c972179fd2de9541f1e16d009b6.scope: Deactivated successfully.
Nov 25 09:31:35 compute-0 podman[75304]: 2025-11-25 09:31:35.086481597 +0000 UTC m=+0.026443830 container create fa034fc3dd8a187c63feb792d3d7eeac7542c0cafca4a632d5d28c07ed7662c1 (image=quay.io/ceph/ceph:v19, name=modest_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:31:35 compute-0 systemd[1]: Started libpod-conmon-fa034fc3dd8a187c63feb792d3d7eeac7542c0cafca4a632d5d28c07ed7662c1.scope.
Nov 25 09:31:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc763a5b7c9609e7481caa50fba180de618c65a65d2a0dcb8892914c80b37d2/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc763a5b7c9609e7481caa50fba180de618c65a65d2a0dcb8892914c80b37d2/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc763a5b7c9609e7481caa50fba180de618c65a65d2a0dcb8892914c80b37d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc763a5b7c9609e7481caa50fba180de618c65a65d2a0dcb8892914c80b37d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc763a5b7c9609e7481caa50fba180de618c65a65d2a0dcb8892914c80b37d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 podman[75304]: 2025-11-25 09:31:35.135730734 +0000 UTC m=+0.075692968 container init fa034fc3dd8a187c63feb792d3d7eeac7542c0cafca4a632d5d28c07ed7662c1 (image=quay.io/ceph/ceph:v19, name=modest_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:31:35 compute-0 podman[75304]: 2025-11-25 09:31:35.139936434 +0000 UTC m=+0.079898657 container start fa034fc3dd8a187c63feb792d3d7eeac7542c0cafca4a632d5d28c07ed7662c1 (image=quay.io/ceph/ceph:v19, name=modest_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:31:35 compute-0 podman[75304]: 2025-11-25 09:31:35.141136526 +0000 UTC m=+0.081098799 container attach fa034fc3dd8a187c63feb792d3d7eeac7542c0cafca4a632d5d28c07ed7662c1 (image=quay.io/ceph/ceph:v19, name=modest_hopper, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:31:35 compute-0 podman[75304]: 2025-11-25 09:31:35.07684947 +0000 UTC m=+0.016811723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:35 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Nov 25 09:31:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:35 compute-0 ceph-mgr[74476]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 25 09:31:35 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 25 09:31:35 compute-0 systemd[1]: libpod-fa034fc3dd8a187c63feb792d3d7eeac7542c0cafca4a632d5d28c07ed7662c1.scope: Deactivated successfully.
Nov 25 09:31:35 compute-0 podman[75304]: 2025-11-25 09:31:35.406167971 +0000 UTC m=+0.346130214 container died fa034fc3dd8a187c63feb792d3d7eeac7542c0cafca4a632d5d28c07ed7662c1 (image=quay.io/ceph/ceph:v19, name=modest_hopper, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dc763a5b7c9609e7481caa50fba180de618c65a65d2a0dcb8892914c80b37d2-merged.mount: Deactivated successfully.
Nov 25 09:31:35 compute-0 podman[75304]: 2025-11-25 09:31:35.424751814 +0000 UTC m=+0.364714046 container remove fa034fc3dd8a187c63feb792d3d7eeac7542c0cafca4a632d5d28c07ed7662c1 (image=quay.io/ceph/ceph:v19, name=modest_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:35 compute-0 systemd[1]: libpod-conmon-fa034fc3dd8a187c63feb792d3d7eeac7542c0cafca4a632d5d28c07ed7662c1.scope: Deactivated successfully.
Nov 25 09:31:35 compute-0 podman[75355]: 2025-11-25 09:31:35.464165792 +0000 UTC m=+0.026467554 container create 03bbd3f7eada42f8158dd2042d38bcd56c844be08389a5941416edaf7e323e03 (image=quay.io/ceph/ceph:v19, name=happy_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:31:35 compute-0 systemd[1]: Started libpod-conmon-03bbd3f7eada42f8158dd2042d38bcd56c844be08389a5941416edaf7e323e03.scope.
Nov 25 09:31:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3231d98c290f3fec1a066e2307b120c3f5a5ea93ef9b47c681153142a80723/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3231d98c290f3fec1a066e2307b120c3f5a5ea93ef9b47c681153142a80723/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3231d98c290f3fec1a066e2307b120c3f5a5ea93ef9b47c681153142a80723/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 podman[75355]: 2025-11-25 09:31:35.497723591 +0000 UTC m=+0.060025344 container init 03bbd3f7eada42f8158dd2042d38bcd56c844be08389a5941416edaf7e323e03 (image=quay.io/ceph/ceph:v19, name=happy_mendeleev, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:35 compute-0 podman[75355]: 2025-11-25 09:31:35.502193466 +0000 UTC m=+0.064495219 container start 03bbd3f7eada42f8158dd2042d38bcd56c844be08389a5941416edaf7e323e03 (image=quay.io/ceph/ceph:v19, name=happy_mendeleev, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:35 compute-0 podman[75355]: 2025-11-25 09:31:35.503151062 +0000 UTC m=+0.065452815 container attach 03bbd3f7eada42f8158dd2042d38bcd56c844be08389a5941416edaf7e323e03 (image=quay.io/ceph/ceph:v19, name=happy_mendeleev, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:31:35 compute-0 podman[75355]: 2025-11-25 09:31:35.453949253 +0000 UTC m=+0.016251015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:35 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.zcfgby(active, since 2s)
Nov 25 09:31:35 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:35 compute-0 happy_mendeleev[75368]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtP42ulGBHh4dYS0Ie/i+PrcEnfDKUN8eO7PcEtAqcnt9ATk1g9V1wyVWcRFhELXCXWPVU+dod8s68a0qJwHpAgbrSX5jUFUUNXXRmXF+g4bRoYNkvSRf6tUPOCLPWK0XXbws6HHGQRbsvU9ZHztU0evdAHtyIzcKWxm4DuJRNG8I5ocURlW5jtuG/lETQ8QyswxGyeBMoD7d7COiTQkJ+nb7ccYte60PhQXSKjQViBRANNw1BgY4/txpn1N+PgXcRTgBwT1WfMILNjuv9CQRk3OCtKUKvboZBi7ZmDiJdegVhunbm5lj0no/X7pFsYCrD/PzJ3cj1YPRNyZl4MV3DchjD7Ddoodv1W/qJ5ZoQVBgXJYM/4Ho/9qzfoRwsZMi5WnVex/eJzU99fTYTzxhmPKo9n9n68SkCT8oyTUAt3NkNg8uaVhZN94o1fkOqTSzoye8vVEywe65meSWvKVNnDBEcX48Y8I7WLtpIDWFZ51sGpOPff7q6rruD5nbyoh8= zuul@controller
Nov 25 09:31:35 compute-0 systemd[1]: libpod-03bbd3f7eada42f8158dd2042d38bcd56c844be08389a5941416edaf7e323e03.scope: Deactivated successfully.
Nov 25 09:31:35 compute-0 podman[75355]: 2025-11-25 09:31:35.767436722 +0000 UTC m=+0.329738484 container died 03bbd3f7eada42f8158dd2042d38bcd56c844be08389a5941416edaf7e323e03 (image=quay.io/ceph/ceph:v19, name=happy_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:35 compute-0 podman[75355]: 2025-11-25 09:31:35.785389996 +0000 UTC m=+0.347691748 container remove 03bbd3f7eada42f8158dd2042d38bcd56c844be08389a5941416edaf7e323e03 (image=quay.io/ceph/ceph:v19, name=happy_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:35 compute-0 systemd[1]: libpod-conmon-03bbd3f7eada42f8158dd2042d38bcd56c844be08389a5941416edaf7e323e03.scope: Deactivated successfully.
Nov 25 09:31:35 compute-0 podman[75403]: 2025-11-25 09:31:35.823663694 +0000 UTC m=+0.024009541 container create 51a8c9374d35a792c4d130b22f4439dd2c3181cd46824b81a1da09d6ac023ca7 (image=quay.io/ceph/ceph:v19, name=keen_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Nov 25 09:31:35 compute-0 systemd[1]: Started libpod-conmon-51a8c9374d35a792c4d130b22f4439dd2c3181cd46824b81a1da09d6ac023ca7.scope.
Nov 25 09:31:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f600e2a8695052771a3ac236cd99e6646238e8b9bbb02111784d5ec57d729c23/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f600e2a8695052771a3ac236cd99e6646238e8b9bbb02111784d5ec57d729c23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f600e2a8695052771a3ac236cd99e6646238e8b9bbb02111784d5ec57d729c23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b3231d98c290f3fec1a066e2307b120c3f5a5ea93ef9b47c681153142a80723-merged.mount: Deactivated successfully.
Nov 25 09:31:35 compute-0 podman[75403]: 2025-11-25 09:31:35.871646105 +0000 UTC m=+0.071991952 container init 51a8c9374d35a792c4d130b22f4439dd2c3181cd46824b81a1da09d6ac023ca7 (image=quay.io/ceph/ceph:v19, name=keen_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 09:31:35 compute-0 podman[75403]: 2025-11-25 09:31:35.875464573 +0000 UTC m=+0.075810420 container start 51a8c9374d35a792c4d130b22f4439dd2c3181cd46824b81a1da09d6ac023ca7 (image=quay.io/ceph/ceph:v19, name=keen_ganguly, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:35 compute-0 podman[75403]: 2025-11-25 09:31:35.876573834 +0000 UTC m=+0.076919681 container attach 51a8c9374d35a792c4d130b22f4439dd2c3181cd46824b81a1da09d6ac023ca7 (image=quay.io/ceph/ceph:v19, name=keen_ganguly, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 09:31:35 compute-0 ceph-mon[74207]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:35 compute-0 ceph-mon[74207]: [25/Nov/2025:09:31:34] ENGINE Bus STARTING
Nov 25 09:31:35 compute-0 ceph-mon[74207]: [25/Nov/2025:09:31:34] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:31:35 compute-0 ceph-mon[74207]: [25/Nov/2025:09:31:34] ENGINE Client ('192.168.122.100', 57068) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:31:35 compute-0 ceph-mon[74207]: [25/Nov/2025:09:31:34] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:31:35 compute-0 ceph-mon[74207]: [25/Nov/2025:09:31:34] ENGINE Bus STARTED
Nov 25 09:31:35 compute-0 ceph-mon[74207]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:35 compute-0 ceph-mon[74207]: Set ssh ssh_user
Nov 25 09:31:35 compute-0 ceph-mon[74207]: Set ssh ssh_config
Nov 25 09:31:35 compute-0 ceph-mon[74207]: ssh user set to ceph-admin. sudo will be used
Nov 25 09:31:35 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:35 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:35 compute-0 ceph-mon[74207]: mgrmap e8: compute-0.zcfgby(active, since 2s)
Nov 25 09:31:35 compute-0 podman[75403]: 2025-11-25 09:31:35.813906992 +0000 UTC m=+0.014252840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:36 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:36 compute-0 sshd-session[75443]: Accepted publickey for ceph-admin from 192.168.122.100 port 54780 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:36 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 09:31:36 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 09:31:36 compute-0 systemd-logind[744]: New session 21 of user ceph-admin.
Nov 25 09:31:36 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 09:31:36 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 25 09:31:36 compute-0 systemd[75447]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:36 compute-0 systemd[75447]: Queued start job for default target Main User Target.
Nov 25 09:31:36 compute-0 systemd[75447]: Created slice User Application Slice.
Nov 25 09:31:36 compute-0 systemd[75447]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 09:31:36 compute-0 systemd[75447]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 09:31:36 compute-0 systemd[75447]: Reached target Paths.
Nov 25 09:31:36 compute-0 systemd[75447]: Reached target Timers.
Nov 25 09:31:36 compute-0 systemd[75447]: Starting D-Bus User Message Bus Socket...
Nov 25 09:31:36 compute-0 systemd[75447]: Starting Create User's Volatile Files and Directories...
Nov 25 09:31:36 compute-0 systemd[75447]: Listening on D-Bus User Message Bus Socket.
Nov 25 09:31:36 compute-0 systemd[75447]: Reached target Sockets.
Nov 25 09:31:36 compute-0 systemd[75447]: Finished Create User's Volatile Files and Directories.
Nov 25 09:31:36 compute-0 systemd[75447]: Reached target Basic System.
Nov 25 09:31:36 compute-0 systemd[75447]: Reached target Main User Target.
Nov 25 09:31:36 compute-0 systemd[75447]: Startup finished in 83ms.
Nov 25 09:31:36 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 25 09:31:36 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Nov 25 09:31:36 compute-0 sshd-session[75443]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:36 compute-0 sshd-session[75463]: Accepted publickey for ceph-admin from 192.168.122.100 port 54796 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:36 compute-0 systemd-logind[744]: New session 23 of user ceph-admin.
Nov 25 09:31:36 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 25 09:31:36 compute-0 sshd-session[75463]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:36 compute-0 sudo[75468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:36 compute-0 sudo[75468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:36 compute-0 sudo[75468]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:36 compute-0 sshd-session[75493]: Accepted publickey for ceph-admin from 192.168.122.100 port 54804 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:36 compute-0 systemd-logind[744]: New session 24 of user ceph-admin.
Nov 25 09:31:36 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 25 09:31:36 compute-0 sshd-session[75493]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:36 compute-0 sudo[75497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Nov 25 09:31:36 compute-0 sudo[75497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:36 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:36 compute-0 sudo[75497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:36 compute-0 ceph-mon[74207]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:36 compute-0 ceph-mon[74207]: Set ssh ssh_identity_key
Nov 25 09:31:36 compute-0 ceph-mon[74207]: Set ssh private key
Nov 25 09:31:36 compute-0 ceph-mon[74207]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:36 compute-0 ceph-mon[74207]: Set ssh ssh_identity_pub
Nov 25 09:31:36 compute-0 ceph-mon[74207]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:37 compute-0 sshd-session[75522]: Accepted publickey for ceph-admin from 192.168.122.100 port 54820 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:37 compute-0 systemd-logind[744]: New session 25 of user ceph-admin.
Nov 25 09:31:37 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 25 09:31:37 compute-0 sshd-session[75522]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:37 compute-0 sudo[75526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 25 09:31:37 compute-0 sudo[75526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:37 compute-0 sudo[75526]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:37 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 25 09:31:37 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 25 09:31:37 compute-0 sshd-session[75551]: Accepted publickey for ceph-admin from 192.168.122.100 port 54828 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:37 compute-0 systemd-logind[744]: New session 26 of user ceph-admin.
Nov 25 09:31:37 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 25 09:31:37 compute-0 sshd-session[75551]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:37 compute-0 sudo[75555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:37 compute-0 sudo[75555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:37 compute-0 sudo[75555]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:37 compute-0 sshd-session[75580]: Accepted publickey for ceph-admin from 192.168.122.100 port 54842 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:37 compute-0 systemd-logind[744]: New session 27 of user ceph-admin.
Nov 25 09:31:37 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 25 09:31:37 compute-0 sshd-session[75580]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920936 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:31:37 compute-0 sudo[75584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:37 compute-0 sudo[75584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:37 compute-0 sudo[75584]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:37 compute-0 sshd-session[75609]: Accepted publickey for ceph-admin from 192.168.122.100 port 54848 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:37 compute-0 systemd-logind[744]: New session 28 of user ceph-admin.
Nov 25 09:31:37 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 25 09:31:37 compute-0 sshd-session[75609]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:37 compute-0 ceph-mon[74207]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:37 compute-0 sudo[75613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 25 09:31:37 compute-0 sudo[75613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:37 compute-0 sudo[75613]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:38 compute-0 sshd-session[75638]: Accepted publickey for ceph-admin from 192.168.122.100 port 54864 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:38 compute-0 systemd-logind[744]: New session 29 of user ceph-admin.
Nov 25 09:31:38 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 25 09:31:38 compute-0 sshd-session[75638]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:38 compute-0 sudo[75642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:38 compute-0 sudo[75642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:38 compute-0 sudo[75642]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:38 compute-0 sshd-session[75667]: Accepted publickey for ceph-admin from 192.168.122.100 port 54878 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:38 compute-0 systemd-logind[744]: New session 30 of user ceph-admin.
Nov 25 09:31:38 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 25 09:31:38 compute-0 sshd-session[75667]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:38 compute-0 sudo[75671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 25 09:31:38 compute-0 sudo[75671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:38 compute-0 sudo[75671]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:38 compute-0 sshd-session[75696]: Accepted publickey for ceph-admin from 192.168.122.100 port 54880 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:38 compute-0 systemd-logind[744]: New session 31 of user ceph-admin.
Nov 25 09:31:38 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 25 09:31:38 compute-0 sshd-session[75696]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:38 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:38 compute-0 ceph-mon[74207]: Deploying cephadm binary to compute-0
Nov 25 09:31:39 compute-0 sshd-session[75723]: Accepted publickey for ceph-admin from 192.168.122.100 port 54890 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:39 compute-0 systemd-logind[744]: New session 32 of user ceph-admin.
Nov 25 09:31:39 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 25 09:31:39 compute-0 sshd-session[75723]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:39 compute-0 sudo[75727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 25 09:31:39 compute-0 sudo[75727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:39 compute-0 sudo[75727]: pam_unix(sudo:session): session closed for user root
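The sudo commands from the mkdir at 09:31:37 through the mv just above are one deployment of the cephadm binary, content-addressed by the SHA-256 in its filename: stage a `.new` file in a scratch tree, chown and chmod it, then move it into /var/lib/ceph/<fsid>/. A condensed sketch of that write-then-rename pattern; it simplifies by staging next to the destination so the final rename is atomic (the log's /tmp staging instead relies on /bin/mv to copy across filesystems), and the payload bytes are an assumption:

    import hashlib, os

    def deploy_cephadm(payload: bytes, fsid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"):
        digest = hashlib.sha256(payload).hexdigest()
        final = f"/var/lib/ceph/{fsid}/cephadm.{digest}"
        tmp = final + ".new"
        os.makedirs(os.path.dirname(final), exist_ok=True)  # /bin/mkdir -p
        with open(tmp, "wb") as f:                          # /bin/touch + content
            f.write(payload)
        os.chmod(tmp, 0o644)                                # /bin/chmod 644
        os.rename(tmp, final)                               # /bin/mv, atomic here
        return final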
Nov 25 09:31:39 compute-0 sshd-session[75752]: Accepted publickey for ceph-admin from 192.168.122.100 port 54902 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:31:39 compute-0 systemd-logind[744]: New session 33 of user ceph-admin.
Nov 25 09:31:39 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Nov 25 09:31:39 compute-0 sshd-session[75752]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:31:39 compute-0 sudo[75756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Nov 25 09:31:39 compute-0 sudo[75756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:39 compute-0 sudo[75756]: pam_unix(sudo:session): session closed for user root
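Before the host add is accepted, the mgr runs the freshly copied binary's check-host gate over SSH; --expect-hostname makes it fail if the host does not identify as compute-0 (a plain check-host without that flag runs again at 09:31:41). Replaying it by hand, using the content-addressed path from the log (root required, as the sudo lines show):

    import subprocess
    binary = ("/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
              "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    # Exits non-zero (CalledProcessError) if the hostname or host prereqs don't check out.
    subprocess.run(["python3", binary, "--timeout", "895",
                    "check-host", "--expect-hostname", "compute-0"], check=True)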
Nov 25 09:31:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 25 09:31:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:39 compute-0 ceph-mgr[74476]: [cephadm INFO root] Added host compute-0
Nov 25 09:31:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 09:31:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 25 09:31:39 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:39 compute-0 keen_ganguly[75417]: Added host 'compute-0' with addr '192.168.122.100'
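The `orch host add` dispatched at 09:31:37 completes here: the mgr records the host in mgr/cephadm/inventory and the bootstrap container (keen_ganguly) prints the confirmation. The equivalent step by hand would be a single orchestrator call, with the hostname and address as recorded above:

    import subprocess
    subprocess.run(["ceph", "orch", "host", "add", "compute-0", "192.168.122.100"],
                   check=True)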
Nov 25 09:31:39 compute-0 systemd[1]: libpod-51a8c9374d35a792c4d130b22f4439dd2c3181cd46824b81a1da09d6ac023ca7.scope: Deactivated successfully.
Nov 25 09:31:39 compute-0 podman[75403]: 2025-11-25 09:31:39.980547477 +0000 UTC m=+4.180893324 container died 51a8c9374d35a792c4d130b22f4439dd2c3181cd46824b81a1da09d6ac023ca7 (image=quay.io/ceph/ceph:v19, name=keen_ganguly, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:31:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f600e2a8695052771a3ac236cd99e6646238e8b9bbb02111784d5ec57d729c23-merged.mount: Deactivated successfully.
Nov 25 09:31:40 compute-0 podman[75403]: 2025-11-25 09:31:40.001568302 +0000 UTC m=+4.201914150 container remove 51a8c9374d35a792c4d130b22f4439dd2c3181cd46824b81a1da09d6ac023ca7 (image=quay.io/ceph/ceph:v19, name=keen_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:40 compute-0 systemd[1]: libpod-conmon-51a8c9374d35a792c4d130b22f4439dd2c3181cd46824b81a1da09d6ac023ca7.scope: Deactivated successfully.
Nov 25 09:31:40 compute-0 sudo[75799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:40 compute-0 sudo[75799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:40 compute-0 sudo[75799]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:40 compute-0 podman[75832]: 2025-11-25 09:31:40.042863969 +0000 UTC m=+0.027529116 container create 3c4d16f78a8be535d17ee62809f0df8efb3eea5678b2895bde8952f6b05499e6 (image=quay.io/ceph/ceph:v19, name=serene_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:40 compute-0 systemd[1]: Started libpod-conmon-3c4d16f78a8be535d17ee62809f0df8efb3eea5678b2895bde8952f6b05499e6.scope.
Nov 25 09:31:40 compute-0 sudo[75844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Nov 25 09:31:40 compute-0 sudo[75844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b02e24e45047889bf692ff2cc60326c11a6f6fc933e1a2265b23d92d4e1dc0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b02e24e45047889bf692ff2cc60326c11a6f6fc933e1a2265b23d92d4e1dc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b02e24e45047889bf692ff2cc60326c11a6f6fc933e1a2265b23d92d4e1dc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:40 compute-0 podman[75832]: 2025-11-25 09:31:40.088526487 +0000 UTC m=+0.073191634 container init 3c4d16f78a8be535d17ee62809f0df8efb3eea5678b2895bde8952f6b05499e6 (image=quay.io/ceph/ceph:v19, name=serene_zhukovsky, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:31:40 compute-0 podman[75832]: 2025-11-25 09:31:40.093296016 +0000 UTC m=+0.077961144 container start 3c4d16f78a8be535d17ee62809f0df8efb3eea5678b2895bde8952f6b05499e6 (image=quay.io/ceph/ceph:v19, name=serene_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:40 compute-0 podman[75832]: 2025-11-25 09:31:40.094710514 +0000 UTC m=+0.079375651 container attach 3c4d16f78a8be535d17ee62809f0df8efb3eea5678b2895bde8952f6b05499e6 (image=quay.io/ceph/ceph:v19, name=serene_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:40 compute-0 podman[75832]: 2025-11-25 09:31:40.031408274 +0000 UTC m=+0.016073431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:40 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:40 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 25 09:31:40 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 25 09:31:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 25 09:31:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:40 compute-0 serene_zhukovsky[75872]: Scheduled mon update...
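"Scheduled mon update..." closes the first of three service-spec applications in this stretch; mgr (placement count 2) and crash (placement *) follow just below. Each "orch apply" dispatch stores a spec under mgr/cephadm/spec.<service> and lets the serve loop converge on it later. A sketch of the three calls as they would look from the CLI, with placements read from the log:

    import subprocess
    for service, placement in (("mon", "5"), ("mgr", "2"), ("crash", "*")):
        subprocess.run(["ceph", "orch", "apply", service,
                        f"--placement={placement}"], check=True)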
Nov 25 09:31:40 compute-0 systemd[1]: libpod-3c4d16f78a8be535d17ee62809f0df8efb3eea5678b2895bde8952f6b05499e6.scope: Deactivated successfully.
Nov 25 09:31:40 compute-0 podman[75832]: 2025-11-25 09:31:40.368497945 +0000 UTC m=+0.353163082 container died 3c4d16f78a8be535d17ee62809f0df8efb3eea5678b2895bde8952f6b05499e6 (image=quay.io/ceph/ceph:v19, name=serene_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7b02e24e45047889bf692ff2cc60326c11a6f6fc933e1a2265b23d92d4e1dc0-merged.mount: Deactivated successfully.
Nov 25 09:31:40 compute-0 podman[75832]: 2025-11-25 09:31:40.388507896 +0000 UTC m=+0.373173032 container remove 3c4d16f78a8be535d17ee62809f0df8efb3eea5678b2895bde8952f6b05499e6 (image=quay.io/ceph/ceph:v19, name=serene_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:40 compute-0 systemd[1]: libpod-conmon-3c4d16f78a8be535d17ee62809f0df8efb3eea5678b2895bde8952f6b05499e6.scope: Deactivated successfully.
Nov 25 09:31:40 compute-0 podman[75928]: 2025-11-25 09:31:40.431847704 +0000 UTC m=+0.026284550 container create f1ecfe8e66c3776726d1d2e9bfd58cdb618b89001891918c2ffdc5b609b86341 (image=quay.io/ceph/ceph:v19, name=quizzical_dewdney, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:40 compute-0 systemd[1]: Started libpod-conmon-f1ecfe8e66c3776726d1d2e9bfd58cdb618b89001891918c2ffdc5b609b86341.scope.
Nov 25 09:31:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d6cc3a9c7ad1db6c6c5936fefdb6b5d220402e0349598621c4bab9aced9460/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d6cc3a9c7ad1db6c6c5936fefdb6b5d220402e0349598621c4bab9aced9460/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d6cc3a9c7ad1db6c6c5936fefdb6b5d220402e0349598621c4bab9aced9460/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:40 compute-0 podman[75928]: 2025-11-25 09:31:40.483060064 +0000 UTC m=+0.077496929 container init f1ecfe8e66c3776726d1d2e9bfd58cdb618b89001891918c2ffdc5b609b86341 (image=quay.io/ceph/ceph:v19, name=quizzical_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:31:40 compute-0 podman[75928]: 2025-11-25 09:31:40.487453826 +0000 UTC m=+0.081890673 container start f1ecfe8e66c3776726d1d2e9bfd58cdb618b89001891918c2ffdc5b609b86341 (image=quay.io/ceph/ceph:v19, name=quizzical_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:31:40 compute-0 podman[75928]: 2025-11-25 09:31:40.488617931 +0000 UTC m=+0.083054777 container attach f1ecfe8e66c3776726d1d2e9bfd58cdb618b89001891918c2ffdc5b609b86341 (image=quay.io/ceph/ceph:v19, name=quizzical_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:40 compute-0 podman[75928]: 2025-11-25 09:31:40.421043427 +0000 UTC m=+0.015480293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:40 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:40 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 25 09:31:40 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 25 09:31:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 25 09:31:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:40 compute-0 quizzical_dewdney[75942]: Scheduled mgr update...
Nov 25 09:31:40 compute-0 systemd[1]: libpod-f1ecfe8e66c3776726d1d2e9bfd58cdb618b89001891918c2ffdc5b609b86341.scope: Deactivated successfully.
Nov 25 09:31:40 compute-0 podman[75928]: 2025-11-25 09:31:40.762512464 +0000 UTC m=+0.356949330 container died f1ecfe8e66c3776726d1d2e9bfd58cdb618b89001891918c2ffdc5b609b86341 (image=quay.io/ceph/ceph:v19, name=quizzical_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:31:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-17d6cc3a9c7ad1db6c6c5936fefdb6b5d220402e0349598621c4bab9aced9460-merged.mount: Deactivated successfully.
Nov 25 09:31:40 compute-0 podman[75928]: 2025-11-25 09:31:40.781937431 +0000 UTC m=+0.376374278 container remove f1ecfe8e66c3776726d1d2e9bfd58cdb618b89001891918c2ffdc5b609b86341 (image=quay.io/ceph/ceph:v19, name=quizzical_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:31:40 compute-0 systemd[1]: libpod-conmon-f1ecfe8e66c3776726d1d2e9bfd58cdb618b89001891918c2ffdc5b609b86341.scope: Deactivated successfully.
Nov 25 09:31:40 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:40 compute-0 podman[75976]: 2025-11-25 09:31:40.83052862 +0000 UTC m=+0.034324455 container create 5c481f81a341a6e1152c290059684a86735ce201e3dee2d36db5f4c647cc0e77 (image=quay.io/ceph/ceph:v19, name=quirky_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:31:40 compute-0 systemd[1]: Started libpod-conmon-5c481f81a341a6e1152c290059684a86735ce201e3dee2d36db5f4c647cc0e77.scope.
Nov 25 09:31:40 compute-0 podman[75906]: 2025-11-25 09:31:40.861278477 +0000 UTC m=+0.616986121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4348334572fedbdae1f6388e0ba0cc3bdc572c3f499bbe298386f2938dc23b8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4348334572fedbdae1f6388e0ba0cc3bdc572c3f499bbe298386f2938dc23b8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4348334572fedbdae1f6388e0ba0cc3bdc572c3f499bbe298386f2938dc23b8a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:40 compute-0 podman[75976]: 2025-11-25 09:31:40.872880116 +0000 UTC m=+0.076675961 container init 5c481f81a341a6e1152c290059684a86735ce201e3dee2d36db5f4c647cc0e77 (image=quay.io/ceph/ceph:v19, name=quirky_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:31:40 compute-0 podman[75976]: 2025-11-25 09:31:40.878508977 +0000 UTC m=+0.082304812 container start 5c481f81a341a6e1152c290059684a86735ce201e3dee2d36db5f4c647cc0e77 (image=quay.io/ceph/ceph:v19, name=quirky_hermann, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:40 compute-0 podman[75976]: 2025-11-25 09:31:40.879661119 +0000 UTC m=+0.083456953 container attach 5c481f81a341a6e1152c290059684a86735ce201e3dee2d36db5f4c647cc0e77 (image=quay.io/ceph/ceph:v19, name=quirky_hermann, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 25 09:31:40 compute-0 podman[75976]: 2025-11-25 09:31:40.81841281 +0000 UTC m=+0.022208666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:40 compute-0 podman[76005]: 2025-11-25 09:31:40.93323004 +0000 UTC m=+0.026030080 container create 310a0c5e3d2f11f3d4daf1db7bdf421115f3e4115bb1ce4214ea69a31d9eb9dc (image=quay.io/ceph/ceph:v19, name=vibrant_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:31:40 compute-0 systemd[1]: Started libpod-conmon-310a0c5e3d2f11f3d4daf1db7bdf421115f3e4115bb1ce4214ea69a31d9eb9dc.scope.
Nov 25 09:31:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:40 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:40 compute-0 ceph-mon[74207]: Added host compute-0
Nov 25 09:31:40 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:31:40 compute-0 ceph-mon[74207]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:40 compute-0 ceph-mon[74207]: Saving service mon spec with placement count:5
Nov 25 09:31:40 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:40 compute-0 ceph-mon[74207]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:40 compute-0 ceph-mon[74207]: Saving service mgr spec with placement count:2
Nov 25 09:31:40 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:40 compute-0 podman[76005]: 2025-11-25 09:31:40.969117842 +0000 UTC m=+0.061917891 container init 310a0c5e3d2f11f3d4daf1db7bdf421115f3e4115bb1ce4214ea69a31d9eb9dc (image=quay.io/ceph/ceph:v19, name=vibrant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:31:40 compute-0 podman[76005]: 2025-11-25 09:31:40.973401828 +0000 UTC m=+0.066201867 container start 310a0c5e3d2f11f3d4daf1db7bdf421115f3e4115bb1ce4214ea69a31d9eb9dc (image=quay.io/ceph/ceph:v19, name=vibrant_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:31:40 compute-0 podman[76005]: 2025-11-25 09:31:40.974504847 +0000 UTC m=+0.067304886 container attach 310a0c5e3d2f11f3d4daf1db7bdf421115f3e4115bb1ce4214ea69a31d9eb9dc (image=quay.io/ceph/ceph:v19, name=vibrant_jang, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Nov 25 09:31:41 compute-0 podman[76005]: 2025-11-25 09:31:40.922533697 +0000 UTC m=+0.015333746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:41 compute-0 vibrant_jang[76020]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Nov 25 09:31:41 compute-0 systemd[1]: libpod-310a0c5e3d2f11f3d4daf1db7bdf421115f3e4115bb1ce4214ea69a31d9eb9dc.scope: Deactivated successfully.
Nov 25 09:31:41 compute-0 podman[76005]: 2025-11-25 09:31:41.05051852 +0000 UTC m=+0.143318558 container died 310a0c5e3d2f11f3d4daf1db7bdf421115f3e4115bb1ce4214ea69a31d9eb9dc (image=quay.io/ceph/ceph:v19, name=vibrant_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfee0a6379f6ef6474c9f9dc9638fa72a3b43e978c2be54c3d44e990c551aa5b-merged.mount: Deactivated successfully.
Nov 25 09:31:41 compute-0 podman[76005]: 2025-11-25 09:31:41.071594449 +0000 UTC m=+0.164394487 container remove 310a0c5e3d2f11f3d4daf1db7bdf421115f3e4115bb1ce4214ea69a31d9eb9dc (image=quay.io/ceph/ceph:v19, name=vibrant_jang, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:31:41 compute-0 systemd[1]: libpod-conmon-310a0c5e3d2f11f3d4daf1db7bdf421115f3e4115bb1ce4214ea69a31d9eb9dc.scope: Deactivated successfully.
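The vibrant_jang container (created at 09:31:40, removed at 09:31:41) is cephadm's image probe: a throwaway `ceph --version` run that pins the exact build, after which the mgr records container_image (the config set at 09:31:41 below). Podman names these short-lived containers randomly, which is why keen_ganguly, serene_zhukovsky, and friends appear throughout. The probe, reproduced directly:

    import subprocess
    out = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19", "ceph", "--version"],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())  # matches the log: "ceph version 19.2.3 (...) squid (stable)"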
Nov 25 09:31:41 compute-0 sudo[75844]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Nov 25 09:31:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:41 compute-0 sudo[76051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:41 compute-0 sudo[76051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:41 compute-0 sudo[76051]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:41 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:41 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service crash spec with placement *
Nov 25 09:31:41 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 25 09:31:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 25 09:31:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:41 compute-0 quirky_hermann[75989]: Scheduled crash update...
Nov 25 09:31:41 compute-0 sudo[76076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 25 09:31:41 compute-0 sudo[76076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:41 compute-0 systemd[1]: libpod-5c481f81a341a6e1152c290059684a86735ce201e3dee2d36db5f4c647cc0e77.scope: Deactivated successfully.
Nov 25 09:31:41 compute-0 podman[75976]: 2025-11-25 09:31:41.195054802 +0000 UTC m=+0.398850637 container died 5c481f81a341a6e1152c290059684a86735ce201e3dee2d36db5f4c647cc0e77 (image=quay.io/ceph/ceph:v19, name=quirky_hermann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 25 09:31:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4348334572fedbdae1f6388e0ba0cc3bdc572c3f499bbe298386f2938dc23b8a-merged.mount: Deactivated successfully.
Nov 25 09:31:41 compute-0 podman[75976]: 2025-11-25 09:31:41.215839301 +0000 UTC m=+0.419635137 container remove 5c481f81a341a6e1152c290059684a86735ce201e3dee2d36db5f4c647cc0e77 (image=quay.io/ceph/ceph:v19, name=quirky_hermann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:41 compute-0 systemd[1]: libpod-conmon-5c481f81a341a6e1152c290059684a86735ce201e3dee2d36db5f4c647cc0e77.scope: Deactivated successfully.
Nov 25 09:31:41 compute-0 podman[76113]: 2025-11-25 09:31:41.259482082 +0000 UTC m=+0.026666661 container create 64e920f271faec8b798b3aa2179e3ef9d7b2bc12475338f5b833f3d7682b898b (image=quay.io/ceph/ceph:v19, name=nostalgic_hermann, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:31:41 compute-0 systemd[1]: Started libpod-conmon-64e920f271faec8b798b3aa2179e3ef9d7b2bc12475338f5b833f3d7682b898b.scope.
Nov 25 09:31:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c83e8b03a2b758672d21ada1977bfacc37d8034e34b7f8e0036c484d8d274c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c83e8b03a2b758672d21ada1977bfacc37d8034e34b7f8e0036c484d8d274c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c83e8b03a2b758672d21ada1977bfacc37d8034e34b7f8e0036c484d8d274c6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:41 compute-0 podman[76113]: 2025-11-25 09:31:41.330470721 +0000 UTC m=+0.097655299 container init 64e920f271faec8b798b3aa2179e3ef9d7b2bc12475338f5b833f3d7682b898b (image=quay.io/ceph/ceph:v19, name=nostalgic_hermann, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:41 compute-0 podman[76113]: 2025-11-25 09:31:41.334648617 +0000 UTC m=+0.101833195 container start 64e920f271faec8b798b3aa2179e3ef9d7b2bc12475338f5b833f3d7682b898b (image=quay.io/ceph/ceph:v19, name=nostalgic_hermann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:31:41 compute-0 podman[76113]: 2025-11-25 09:31:41.335856383 +0000 UTC m=+0.103040962 container attach 64e920f271faec8b798b3aa2179e3ef9d7b2bc12475338f5b833f3d7682b898b (image=quay.io/ceph/ceph:v19, name=nostalgic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 09:31:41 compute-0 podman[76113]: 2025-11-25 09:31:41.248103883 +0000 UTC m=+0.015288481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:41 compute-0 sudo[76076]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:41 compute-0 sudo[76168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:41 compute-0 sudo[76168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:41 compute-0 sudo[76168]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:41 compute-0 sudo[76193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:31:41 compute-0 sudo[76193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Nov 25 09:31:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3554016393' entity='client.admin' 
Nov 25 09:31:41 compute-0 systemd[1]: libpod-64e920f271faec8b798b3aa2179e3ef9d7b2bc12475338f5b833f3d7682b898b.scope: Deactivated successfully.
Nov 25 09:31:41 compute-0 podman[76113]: 2025-11-25 09:31:41.624214422 +0000 UTC m=+0.391399010 container died 64e920f271faec8b798b3aa2179e3ef9d7b2bc12475338f5b833f3d7682b898b (image=quay.io/ceph/ceph:v19, name=nostalgic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c83e8b03a2b758672d21ada1977bfacc37d8034e34b7f8e0036c484d8d274c6-merged.mount: Deactivated successfully.
Nov 25 09:31:41 compute-0 podman[76113]: 2025-11-25 09:31:41.646756957 +0000 UTC m=+0.413941535 container remove 64e920f271faec8b798b3aa2179e3ef9d7b2bc12475338f5b833f3d7682b898b (image=quay.io/ceph/ceph:v19, name=nostalgic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:31:41 compute-0 systemd[1]: libpod-conmon-64e920f271faec8b798b3aa2179e3ef9d7b2bc12475338f5b833f3d7682b898b.scope: Deactivated successfully.
Nov 25 09:31:41 compute-0 podman[76230]: 2025-11-25 09:31:41.690579175 +0000 UTC m=+0.027778165 container create 1d96d37889f8218ac00c164f54409b8d6ff0be494230a48ec38806c9f532d74e (image=quay.io/ceph/ceph:v19, name=relaxed_bhaskara, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:31:41 compute-0 systemd[1]: Started libpod-conmon-1d96d37889f8218ac00c164f54409b8d6ff0be494230a48ec38806c9f532d74e.scope.
Nov 25 09:31:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e512ca2d3a63c508e529007e443b274baf51c997be606cb223a05d23280543e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e512ca2d3a63c508e529007e443b274baf51c997be606cb223a05d23280543e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e512ca2d3a63c508e529007e443b274baf51c997be606cb223a05d23280543e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:41 compute-0 podman[76230]: 2025-11-25 09:31:41.734296907 +0000 UTC m=+0.071495897 container init 1d96d37889f8218ac00c164f54409b8d6ff0be494230a48ec38806c9f532d74e (image=quay.io/ceph/ceph:v19, name=relaxed_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 09:31:41 compute-0 podman[76230]: 2025-11-25 09:31:41.738068505 +0000 UTC m=+0.075267496 container start 1d96d37889f8218ac00c164f54409b8d6ff0be494230a48ec38806c9f532d74e (image=quay.io/ceph/ceph:v19, name=relaxed_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:31:41 compute-0 podman[76230]: 2025-11-25 09:31:41.739300639 +0000 UTC m=+0.076499649 container attach 1d96d37889f8218ac00c164f54409b8d6ff0be494230a48ec38806c9f532d74e (image=quay.io/ceph/ceph:v19, name=relaxed_bhaskara, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:41 compute-0 podman[76230]: 2025-11-25 09:31:41.679069349 +0000 UTC m=+0.016268338 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:41 compute-0 podman[76324]: 2025-11-25 09:31:41.904075372 +0000 UTC m=+0.032693601 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:41 compute-0 podman[76324]: 2025-11-25 09:31:41.983066467 +0000 UTC m=+0.111684696 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:31:42 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Nov 25 09:31:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:42 compute-0 systemd[1]: libpod-1d96d37889f8218ac00c164f54409b8d6ff0be494230a48ec38806c9f532d74e.scope: Deactivated successfully.
Nov 25 09:31:42 compute-0 podman[76230]: 2025-11-25 09:31:42.02221204 +0000 UTC m=+0.359411029 container died 1d96d37889f8218ac00c164f54409b8d6ff0be494230a48ec38806c9f532d74e (image=quay.io/ceph/ceph:v19, name=relaxed_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e512ca2d3a63c508e529007e443b274baf51c997be606cb223a05d23280543e5-merged.mount: Deactivated successfully.
Nov 25 09:31:42 compute-0 podman[76230]: 2025-11-25 09:31:42.04152165 +0000 UTC m=+0.378720640 container remove 1d96d37889f8218ac00c164f54409b8d6ff0be494230a48ec38806c9f532d74e (image=quay.io/ceph/ceph:v19, name=relaxed_bhaskara, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:42 compute-0 systemd[1]: libpod-conmon-1d96d37889f8218ac00c164f54409b8d6ff0be494230a48ec38806c9f532d74e.scope: Deactivated successfully.
Nov 25 09:31:42 compute-0 sudo[76193]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:42 compute-0 podman[76376]: 2025-11-25 09:31:42.08943526 +0000 UTC m=+0.028567943 container create bc67fb6af41ecc5b824234b98b1f2edb194676475e1edb9f89ab08bc9c150f28 (image=quay.io/ceph/ceph:v19, name=agitated_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:31:42 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:42 compute-0 ceph-mon[74207]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:42 compute-0 ceph-mon[74207]: Saving service crash spec with placement *
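    [note] "Saving service crash spec with placement *" means the crash-collector service is scheduled on every managed host. A minimal sketch of the CLI call corresponding to the logged {"prefix": "orch apply", "service_type": "crash"} dispatch (positional placement syntax assumed from the standard orchestrator CLI):
        # Deploy the crash service cluster-wide:
        ceph orch apply crash '*'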
Nov 25 09:31:42 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:42 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:42 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3554016393' entity='client.admin' 
Nov 25 09:31:42 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:42 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:42 compute-0 systemd[1]: Started libpod-conmon-bc67fb6af41ecc5b824234b98b1f2edb194676475e1edb9f89ab08bc9c150f28.scope.
Nov 25 09:31:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd71d7f1c06d276b8c023cffa0113fdfa0bf1195ec2e376a812799054242b0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd71d7f1c06d276b8c023cffa0113fdfa0bf1195ec2e376a812799054242b0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd71d7f1c06d276b8c023cffa0113fdfa0bf1195ec2e376a812799054242b0c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:42 compute-0 sudo[76386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:42 compute-0 sudo[76386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:42 compute-0 sudo[76386]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:42 compute-0 podman[76376]: 2025-11-25 09:31:42.139924938 +0000 UTC m=+0.079057631 container init bc67fb6af41ecc5b824234b98b1f2edb194676475e1edb9f89ab08bc9c150f28 (image=quay.io/ceph/ceph:v19, name=agitated_elion, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:42 compute-0 podman[76376]: 2025-11-25 09:31:42.145504165 +0000 UTC m=+0.084636828 container start bc67fb6af41ecc5b824234b98b1f2edb194676475e1edb9f89ab08bc9c150f28 (image=quay.io/ceph/ceph:v19, name=agitated_elion, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:42 compute-0 podman[76376]: 2025-11-25 09:31:42.146701702 +0000 UTC m=+0.085834376 container attach bc67fb6af41ecc5b824234b98b1f2edb194676475e1edb9f89ab08bc9c150f28 (image=quay.io/ceph/ceph:v19, name=agitated_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:42 compute-0 podman[76376]: 2025-11-25 09:31:42.076556073 +0000 UTC m=+0.015688766 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:42 compute-0 sudo[76418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:31:42 compute-0 sudo[76418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:42 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76473 (sysctl)
Nov 25 09:31:42 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 25 09:31:42 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 25 09:31:42 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 25 09:31:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:42 compute-0 ceph-mgr[74476]: [cephadm INFO root] Added label _admin to host compute-0
Nov 25 09:31:42 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 25 09:31:42 compute-0 agitated_elion[76407]: Added label _admin to host compute-0
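    [note] The audit entries above pair a host label with a keyring-distribution rule: compute-0 gains the _admin label, and cephadm keeps client.admin's keyring on every host carrying that label. A sketch of the same result by hand, using the commands the logged JSON dispatches correspond to:
        # Label the host, then bind the admin keyring to that label:
        ceph orch host label add compute-0 _admin
        ceph orch client-keyring set client.admin label:_admin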
Nov 25 09:31:42 compute-0 systemd[1]: libpod-bc67fb6af41ecc5b824234b98b1f2edb194676475e1edb9f89ab08bc9c150f28.scope: Deactivated successfully.
Nov 25 09:31:42 compute-0 podman[76376]: 2025-11-25 09:31:42.42816332 +0000 UTC m=+0.367295994 container died bc67fb6af41ecc5b824234b98b1f2edb194676475e1edb9f89ab08bc9c150f28 (image=quay.io/ceph/ceph:v19, name=agitated_elion, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bd71d7f1c06d276b8c023cffa0113fdfa0bf1195ec2e376a812799054242b0c-merged.mount: Deactivated successfully.
Nov 25 09:31:42 compute-0 podman[76376]: 2025-11-25 09:31:42.449428137 +0000 UTC m=+0.388560810 container remove bc67fb6af41ecc5b824234b98b1f2edb194676475e1edb9f89ab08bc9c150f28 (image=quay.io/ceph/ceph:v19, name=agitated_elion, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:42 compute-0 systemd[1]: libpod-conmon-bc67fb6af41ecc5b824234b98b1f2edb194676475e1edb9f89ab08bc9c150f28.scope: Deactivated successfully.
Nov 25 09:31:42 compute-0 podman[76491]: 2025-11-25 09:31:42.492439807 +0000 UTC m=+0.027938768 container create 777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377 (image=quay.io/ceph/ceph:v19, name=eloquent_wing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:31:42 compute-0 systemd[1]: Started libpod-conmon-777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377.scope.
Nov 25 09:31:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2de4711aa994b81846fc6aed75fbdbe8b860525b79cb4a4fa3dbfb286fd864c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2de4711aa994b81846fc6aed75fbdbe8b860525b79cb4a4fa3dbfb286fd864c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2de4711aa994b81846fc6aed75fbdbe8b860525b79cb4a4fa3dbfb286fd864c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:42 compute-0 podman[76491]: 2025-11-25 09:31:42.546840026 +0000 UTC m=+0.082338997 container init 777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377 (image=quay.io/ceph/ceph:v19, name=eloquent_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:42 compute-0 podman[76491]: 2025-11-25 09:31:42.552490167 +0000 UTC m=+0.087989128 container start 777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377 (image=quay.io/ceph/ceph:v19, name=eloquent_wing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:31:42 compute-0 podman[76491]: 2025-11-25 09:31:42.553763618 +0000 UTC m=+0.089262578 container attach 777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377 (image=quay.io/ceph/ceph:v19, name=eloquent_wing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:42 compute-0 podman[76491]: 2025-11-25 09:31:42.481026904 +0000 UTC m=+0.016525884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:42 compute-0 sudo[76418]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053019 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:31:42 compute-0 sudo[76527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:42 compute-0 sudo[76527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:42 compute-0 sudo[76527]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:42 compute-0 sudo[76571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 09:31:42 compute-0 sudo[76571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:42 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:42 compute-0 sudo[76571]: pam_unix(sudo:session): session closed for user root
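    [note] The two sudo sessions above are the cephadm mgr module introspecting the host: gather-facts collects hardware/OS facts and list-networks maps subnets to interfaces, both via the cephadm copy staged under /var/lib/ceph/<fsid>/. A sketch of reproducing them manually (assumes a cephadm binary on PATH; root is required, hence the logged sudo sessions):
        # Same introspection the orchestrator just ran over SSH:
        sudo cephadm gather-facts
        sudo cephadm list-networks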
Nov 25 09:31:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Nov 25 09:31:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2729259365' entity='client.admin' 
Nov 25 09:31:42 compute-0 eloquent_wing[76510]: set mgr/dashboard/cluster/status
Nov 25 09:31:42 compute-0 systemd[1]: libpod-777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377.scope: Deactivated successfully.
Nov 25 09:31:42 compute-0 conmon[76510]: conmon 777780c3eada828bc2a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377.scope/container/memory.events
Nov 25 09:31:42 compute-0 podman[76491]: 2025-11-25 09:31:42.914250575 +0000 UTC m=+0.449749536 container died 777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377 (image=quay.io/ceph/ceph:v19, name=eloquent_wing, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:42 compute-0 podman[76491]: 2025-11-25 09:31:42.935685831 +0000 UTC m=+0.471184793 container remove 777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377 (image=quay.io/ceph/ceph:v19, name=eloquent_wing, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:42 compute-0 sudo[76613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:42 compute-0 systemd[1]: libpod-conmon-777780c3eada828bc2a92b535927bdbee3bbded0a5923e2eb08f21c257f97377.scope: Deactivated successfully.
Nov 25 09:31:42 compute-0 sudo[76613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:42 compute-0 sudo[76613]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:42 compute-0 sudo[73273]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:42 compute-0 sudo[76647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- inventory --format=json-pretty --filter-for-batch
Nov 25 09:31:42 compute-0 sudo[76647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:43 compute-0 sudo[76721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuqprabvldxpfcwffoqcmcadsksjukga ; /usr/bin/python3'
Nov 25 09:31:43 compute-0 sudo[76721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:43 compute-0 podman[76728]: 2025-11-25 09:31:43.256836557 +0000 UTC m=+0.027073407 container create a8beca418f8c6d81f7bd2026caedbf446621548a26a3e3beaef1f683434a4c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_diffie, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:43 compute-0 systemd[1]: Started libpod-conmon-a8beca418f8c6d81f7bd2026caedbf446621548a26a3e3beaef1f683434a4c5c.scope.
Nov 25 09:31:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:43 compute-0 podman[76728]: 2025-11-25 09:31:43.304409275 +0000 UTC m=+0.074646125 container init a8beca418f8c6d81f7bd2026caedbf446621548a26a3e3beaef1f683434a4c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_diffie, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:31:43 compute-0 podman[76728]: 2025-11-25 09:31:43.308484567 +0000 UTC m=+0.078721397 container start a8beca418f8c6d81f7bd2026caedbf446621548a26a3e3beaef1f683434a4c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_diffie, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:43 compute-0 angry_diffie[76741]: 167 167
Nov 25 09:31:43 compute-0 podman[76728]: 2025-11-25 09:31:43.311556208 +0000 UTC m=+0.081793068 container attach a8beca418f8c6d81f7bd2026caedbf446621548a26a3e3beaef1f683434a4c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 25 09:31:43 compute-0 systemd[1]: libpod-a8beca418f8c6d81f7bd2026caedbf446621548a26a3e3beaef1f683434a4c5c.scope: Deactivated successfully.
Nov 25 09:31:43 compute-0 podman[76728]: 2025-11-25 09:31:43.312074344 +0000 UTC m=+0.082311185 container died a8beca418f8c6d81f7bd2026caedbf446621548a26a3e3beaef1f683434a4c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 25 09:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-304dbc3e6e16b64b914b1f24d21b469038918c866c414a044e5bac58205dce2c-merged.mount: Deactivated successfully.
Nov 25 09:31:43 compute-0 podman[76728]: 2025-11-25 09:31:43.330743998 +0000 UTC m=+0.100980838 container remove a8beca418f8c6d81f7bd2026caedbf446621548a26a3e3beaef1f683434a4c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_diffie, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:31:43 compute-0 podman[76728]: 2025-11-25 09:31:43.246477589 +0000 UTC m=+0.016714449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:31:43 compute-0 systemd[1]: libpod-conmon-a8beca418f8c6d81f7bd2026caedbf446621548a26a3e3beaef1f683434a4c5c.scope: Deactivated successfully.
Nov 25 09:31:43 compute-0 python3[76727]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
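    [note] The Ansible task runs the ceph CLI from the container image rather than requiring ceph-common on the host; setting mgr/cephadm/use_repo_digest to false stops cephadm from rewriting the v19 tag into a pinned repo digest. The same wrapper works for any ceph command; a sketch with volumes and entrypoint copied from the logged task:
        # Containerized ceph CLI, as used by the playbook:
        podman run --rm --net=host --ipc=host \
            --volume /etc/ceph:/etc/ceph:z \
            --entrypoint ceph quay.io/ceph/ceph:v19 \
            -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
            status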
Nov 25 09:31:43 compute-0 podman[76758]: 2025-11-25 09:31:43.380930955 +0000 UTC m=+0.026065607 container create 0304293da514f13877788e097dd28ea9c36fe718b9937bca7c593b9339dd579a (image=quay.io/ceph/ceph:v19, name=tender_gauss, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:31:43 compute-0 systemd[1]: Started libpod-conmon-0304293da514f13877788e097dd28ea9c36fe718b9937bca7c593b9339dd579a.scope.
Nov 25 09:31:43 compute-0 ceph-mon[74207]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:43 compute-0 ceph-mon[74207]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:43 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:43 compute-0 ceph-mon[74207]: Added label _admin to host compute-0
Nov 25 09:31:43 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:43 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2729259365' entity='client.admin' 
Nov 25 09:31:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc82acbf6aaf6d59a90bfca71255212ba33e7fbb46880bc21d846444d9b0281/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc82acbf6aaf6d59a90bfca71255212ba33e7fbb46880bc21d846444d9b0281/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:43 compute-0 podman[76758]: 2025-11-25 09:31:43.42684696 +0000 UTC m=+0.071981612 container init 0304293da514f13877788e097dd28ea9c36fe718b9937bca7c593b9339dd579a (image=quay.io/ceph/ceph:v19, name=tender_gauss, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:43 compute-0 podman[76758]: 2025-11-25 09:31:43.430998376 +0000 UTC m=+0.076133018 container start 0304293da514f13877788e097dd28ea9c36fe718b9937bca7c593b9339dd579a (image=quay.io/ceph/ceph:v19, name=tender_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:31:43 compute-0 podman[76758]: 2025-11-25 09:31:43.43224138 +0000 UTC m=+0.077376032 container attach 0304293da514f13877788e097dd28ea9c36fe718b9937bca7c593b9339dd579a (image=quay.io/ceph/ceph:v19, name=tender_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:43 compute-0 podman[76778]: 2025-11-25 09:31:43.456985444 +0000 UTC m=+0.027556749 container create 9af987ad69e89dc5fd8f13a5daa5427fb3f1263c87dabb6f3637d4e4b4294710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_borg, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:31:43 compute-0 podman[76758]: 2025-11-25 09:31:43.370302799 +0000 UTC m=+0.015437461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:43 compute-0 systemd[1]: Started libpod-conmon-9af987ad69e89dc5fd8f13a5daa5427fb3f1263c87dabb6f3637d4e4b4294710.scope.
Nov 25 09:31:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1f6647e4f7e6e8930c454643ef1d735520d2b0d52941998c398c97fd32fa56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1f6647e4f7e6e8930c454643ef1d735520d2b0d52941998c398c97fd32fa56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1f6647e4f7e6e8930c454643ef1d735520d2b0d52941998c398c97fd32fa56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1f6647e4f7e6e8930c454643ef1d735520d2b0d52941998c398c97fd32fa56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:43 compute-0 podman[76778]: 2025-11-25 09:31:43.516707885 +0000 UTC m=+0.087279200 container init 9af987ad69e89dc5fd8f13a5daa5427fb3f1263c87dabb6f3637d4e4b4294710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:43 compute-0 podman[76778]: 2025-11-25 09:31:43.521928987 +0000 UTC m=+0.092500282 container start 9af987ad69e89dc5fd8f13a5daa5427fb3f1263c87dabb6f3637d4e4b4294710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_borg, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:31:43 compute-0 podman[76778]: 2025-11-25 09:31:43.524618608 +0000 UTC m=+0.095189903 container attach 9af987ad69e89dc5fd8f13a5daa5427fb3f1263c87dabb6f3637d4e4b4294710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_borg, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:31:43 compute-0 podman[76778]: 2025-11-25 09:31:43.445876352 +0000 UTC m=+0.016447657 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:31:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Nov 25 09:31:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1538226595' entity='client.admin' 
Nov 25 09:31:43 compute-0 systemd[1]: libpod-0304293da514f13877788e097dd28ea9c36fe718b9937bca7c593b9339dd579a.scope: Deactivated successfully.
Nov 25 09:31:43 compute-0 podman[76823]: 2025-11-25 09:31:43.738816759 +0000 UTC m=+0.015649563 container died 0304293da514f13877788e097dd28ea9c36fe718b9937bca7c593b9339dd579a (image=quay.io/ceph/ceph:v19, name=tender_gauss, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:31:43 compute-0 podman[76823]: 2025-11-25 09:31:43.760316888 +0000 UTC m=+0.037149692 container remove 0304293da514f13877788e097dd28ea9c36fe718b9937bca7c593b9339dd579a (image=quay.io/ceph/ceph:v19, name=tender_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:43 compute-0 systemd[1]: libpod-conmon-0304293da514f13877788e097dd28ea9c36fe718b9937bca7c593b9339dd579a.scope: Deactivated successfully.
Nov 25 09:31:43 compute-0 sudo[76721]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bc82acbf6aaf6d59a90bfca71255212ba33e7fbb46880bc21d846444d9b0281-merged.mount: Deactivated successfully.
Nov 25 09:31:44 compute-0 relaxed_borg[76792]: [
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:     {
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         "available": false,
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         "being_replaced": false,
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         "ceph_device_lvm": false,
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         "lsm_data": {},
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         "lvs": [],
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         "path": "/dev/sr0",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         "rejected_reasons": [
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "Insufficient space (<5GB)",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "Has a FileSystem"
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         ],
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         "sys_api": {
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "actuators": null,
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "device_nodes": [
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:                 "sr0"
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             ],
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "devname": "sr0",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "human_readable_size": "474.00 KB",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "id_bus": "ata",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "model": "QEMU DVD-ROM",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "nr_requests": "64",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "parent": "/dev/sr0",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "partitions": {},
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "path": "/dev/sr0",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "removable": "1",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "rev": "2.5+",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "ro": "0",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "rotational": "1",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "sas_address": "",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "sas_device_handle": "",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "scheduler_mode": "mq-deadline",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "sectors": 0,
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "sectorsize": "2048",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "size": 485376.0,
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "support_discard": "2048",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "type": "disk",
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:             "vendor": "QEMU"
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:         }
Nov 25 09:31:44 compute-0 relaxed_borg[76792]:     }
Nov 25 09:31:44 compute-0 relaxed_borg[76792]: ]
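    [note] The inventory explains why no OSDs come up yet: the only device ceph-volume found is /dev/sr0, the 474 KB QEMU DVD drive, rejected for having under 5 GB and an existing filesystem ("available": false). That matches the mgr's "waiting for OSDs" message at 09:31:42. The report can be regenerated with the command logged at 09:31:42 (pid 76647); --filter-for-batch applies the same availability checks a batch OSD creation would:
        # Re-run the device inventory by hand:
        sudo cephadm ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- \
            inventory --format=json-pretty --filter-for-batch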
Nov 25 09:31:44 compute-0 systemd[1]: libpod-9af987ad69e89dc5fd8f13a5daa5427fb3f1263c87dabb6f3637d4e4b4294710.scope: Deactivated successfully.
Nov 25 09:31:44 compute-0 podman[76778]: 2025-11-25 09:31:44.078234929 +0000 UTC m=+0.648806224 container died 9af987ad69e89dc5fd8f13a5daa5427fb3f1263c87dabb6f3637d4e4b4294710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb1f6647e4f7e6e8930c454643ef1d735520d2b0d52941998c398c97fd32fa56-merged.mount: Deactivated successfully.
Nov 25 09:31:44 compute-0 podman[76778]: 2025-11-25 09:31:44.102291698 +0000 UTC m=+0.672862984 container remove 9af987ad69e89dc5fd8f13a5daa5427fb3f1263c87dabb6f3637d4e4b4294710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:44 compute-0 systemd[1]: libpod-conmon-9af987ad69e89dc5fd8f13a5daa5427fb3f1263c87dabb6f3637d4e4b4294710.scope: Deactivated successfully.
Nov 25 09:31:44 compute-0 sudo[76647]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:31:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:31:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 25 09:31:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:31:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:31:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:31:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:44 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:31:44 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:31:44 compute-0 sudo[78058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:31:44 compute-0 sudo[78058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78058]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:31:44 compute-0 sudo[78083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78083]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:31:44 compute-0 sudo[78108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78108]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:44 compute-0 sudo[78156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78156]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:31:44 compute-0 sudo[78205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78205]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxygvljzayeilesljclfhrmxhztccqsx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063104.0360618-37451-203987839595338/async_wrapper.py j869286043895 30 /home/zuul/.ansible/tmp/ansible-tmp-1764063104.0360618-37451-203987839595338/AnsiballZ_command.py _'
Nov 25 09:31:44 compute-0 sudo[78254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:44 compute-0 sudo[78281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:31:44 compute-0 sudo[78281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78281]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:31:44 compute-0 sudo[78306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78306]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 ansible-async_wrapper.py[78261]: Invoked with j869286043895 30 /home/zuul/.ansible/tmp/ansible-tmp-1764063104.0360618-37451-203987839595338/AnsiballZ_command.py _
Nov 25 09:31:44 compute-0 ansible-async_wrapper.py[78345]: Starting module and watcher
Nov 25 09:31:44 compute-0 ansible-async_wrapper.py[78345]: Start watching 78351 (30)
Nov 25 09:31:44 compute-0 ansible-async_wrapper.py[78351]: Start module (78351)
Nov 25 09:31:44 compute-0 ansible-async_wrapper.py[78261]: Return async_wrapper task started.
Nov 25 09:31:44 compute-0 sudo[78254]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:31:44 compute-0 sudo[78331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78331]: pam_unix(sudo:session): session closed for user root
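    [note] The mkdir/touch/chown/chmod/mv sequence from 09:31:44 is cephadm's remote file-write pattern: stage the new ceph.conf under /tmp/cephadm-<fsid>, fix ownership and mode while it is still staged, then mv it over /etc/ceph/ceph.conf so readers never open a half-written file. A condensed sketch of the pattern (mv is only an atomic rename(2) when source and target share a filesystem; across filesystems it falls back to copy plus unlink):
        # Stage, fix perms, then swap into place:
        tmp=/tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
        install -D -m 644 -o root -g root /dev/null "$tmp"
        # ... write the generated minimal conf into "$tmp" ...
        mv "$tmp" /etc/ceph/ceph.conf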
Nov 25 09:31:44 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:31:44 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:31:44 compute-0 sudo[78361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:31:44 compute-0 sudo[78361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78361]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:31:44 compute-0 sudo[78386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78386]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 python3[78352]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:31:44 compute-0 podman[78411]: 2025-11-25 09:31:44.655382198 +0000 UTC m=+0.027551858 container create 390353eb66f1a153efd607c38184c2b46c3d649d49b36d6ac51390d97dac40f9 (image=quay.io/ceph/ceph:v19, name=hungry_spence, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:44 compute-0 sudo[78412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:31:44 compute-0 sudo[78412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78412]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 systemd[1]: Started libpod-conmon-390353eb66f1a153efd607c38184c2b46c3d649d49b36d6ac51390d97dac40f9.scope.
Nov 25 09:31:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:44 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1538226595' entity='client.admin' 
Nov 25 09:31:44 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:44 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:44 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:44 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:44 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:31:44 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:44 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ce23c6a31c13aaa1324aa5e424d1f982ccd6fe0e3da9cf516ce8f18d1a2a45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ce23c6a31c13aaa1324aa5e424d1f982ccd6fe0e3da9cf516ce8f18d1a2a45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:44 compute-0 podman[78411]: 2025-11-25 09:31:44.704567917 +0000 UTC m=+0.076737587 container init 390353eb66f1a153efd607c38184c2b46c3d649d49b36d6ac51390d97dac40f9 (image=quay.io/ceph/ceph:v19, name=hungry_spence, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:44 compute-0 sudo[78449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:44 compute-0 podman[78411]: 2025-11-25 09:31:44.709963248 +0000 UTC m=+0.082132908 container start 390353eb66f1a153efd607c38184c2b46c3d649d49b36d6ac51390d97dac40f9 (image=quay.io/ceph/ceph:v19, name=hungry_spence, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:44 compute-0 podman[78411]: 2025-11-25 09:31:44.711821792 +0000 UTC m=+0.083991452 container attach 390353eb66f1a153efd607c38184c2b46c3d649d49b36d6ac51390d97dac40f9 (image=quay.io/ceph/ceph:v19, name=hungry_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:31:44 compute-0 sudo[78449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78449]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 podman[78411]: 2025-11-25 09:31:44.644438839 +0000 UTC m=+0.016608510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:44 compute-0 sudo[78478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:31:44 compute-0 sudo[78478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78478]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:31:44 compute-0 sudo[78545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:44 compute-0 sudo[78545]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:31:44 compute-0 sudo[78570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78570]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 sudo[78595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:31:44 compute-0 sudo[78595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78595]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:31:44 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:31:44 compute-0 sudo[78620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:31:44 compute-0 sudo[78620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78620]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:44 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:31:44 compute-0 hungry_spence[78455]: 
Nov 25 09:31:44 compute-0 hungry_spence[78455]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 09:31:44 compute-0 systemd[1]: libpod-390353eb66f1a153efd607c38184c2b46c3d649d49b36d6ac51390d97dac40f9.scope: Deactivated successfully.
Nov 25 09:31:44 compute-0 podman[78411]: 2025-11-25 09:31:44.986526623 +0000 UTC m=+0.358696283 container died 390353eb66f1a153efd607c38184c2b46c3d649d49b36d6ac51390d97dac40f9 (image=quay.io/ceph/ceph:v19, name=hungry_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:44 compute-0 sudo[78645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:31:44 compute-0 sudo[78645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:44 compute-0 sudo[78645]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-76ce23c6a31c13aaa1324aa5e424d1f982ccd6fe0e3da9cf516ce8f18d1a2a45-merged.mount: Deactivated successfully.
Nov 25 09:31:45 compute-0 podman[78411]: 2025-11-25 09:31:45.008381882 +0000 UTC m=+0.380551542 container remove 390353eb66f1a153efd607c38184c2b46c3d649d49b36d6ac51390d97dac40f9 (image=quay.io/ceph/ceph:v19, name=hungry_spence, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:31:45 compute-0 systemd[1]: libpod-conmon-390353eb66f1a153efd607c38184c2b46c3d649d49b36d6ac51390d97dac40f9.scope: Deactivated successfully.
Nov 25 09:31:45 compute-0 ansible-async_wrapper.py[78351]: Module complete (78351)
Nov 25 09:31:45 compute-0 sudo[78679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:31:45 compute-0 sudo[78679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78679]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[78707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:45 compute-0 sudo[78707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78707]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[78732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:31:45 compute-0 sudo[78732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78732]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[78780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:31:45 compute-0 sudo[78780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78780]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[78805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:31:45 compute-0 sudo[78805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78805]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[78830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 09:31:45 compute-0 sudo[78830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78830]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:31:45 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:31:45 compute-0 sudo[78855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:31:45 compute-0 sudo[78855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78855]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[78880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:31:45 compute-0 sudo[78880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78880]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[78905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:31:45 compute-0 sudo[78905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78905]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[78930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:45 compute-0 sudo[78930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78930]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[78955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:31:45 compute-0 sudo[78955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[78955]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[79003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:31:45 compute-0 sudo[79003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[79003]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[79034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:31:45 compute-0 sudo[79034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[79034]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[79076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:31:45 compute-0 sudo[79076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[79076]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:31:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:31:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:45 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev b7eebb96-1bc0-4485-9eb4-1bcf8daa7e1d (Updating crash deployment (+1 -> 1))
Nov 25 09:31:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 25 09:31:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:31:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
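The dispatch/finished pair above shows the mgr minting the crash daemon's credentials through the monitor's structured command interface. The same command can be sent from Python with the librados binding; a sketch, assuming python3-rados is installed and admin credentials sit at the usual host paths:

    # Issue the logged `auth get-or-create` as a mon command via librados.
    import json
    import rados

    cmd = {"prefix": "auth get-or-create",
           "entity": "client.crash.compute-0",
           "caps": ["mon", "profile crash", "mgr", "profile crash"]}

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.admin",
                     conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"}) as cluster:
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(outs)
        print(outbuf.decode())   # keyring text for client.crash.compute-0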
Nov 25 09:31:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:31:45 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:45 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 25 09:31:45 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 25 09:31:45 compute-0 sudo[79101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:45 compute-0 sudo[79101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 sudo[79101]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[79148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmhafmderniotfrrgujsxiceuhgfkxba ; /usr/bin/python3'
Nov 25 09:31:45 compute-0 sudo[79148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:45 compute-0 ceph-mon[74207]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:31:45 compute-0 ceph-mon[74207]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:31:45 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:45 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:45 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:45 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:31:45 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 09:31:45 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:45 compute-0 sudo[79151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:45 compute-0 sudo[79151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:45 compute-0 python3[79153]: ansible-ansible.legacy.async_status Invoked with jid=j869286043895.78261 mode=status _async_dir=/root/.ansible_async
Nov 25 09:31:45 compute-0 sudo[79148]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:45 compute-0 sudo[79233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aodqnnsqmcwhugcfegfdcaykfepjwnxc ; /usr/bin/python3'
Nov 25 09:31:45 compute-0 sudo[79233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:46 compute-0 podman[79260]: 2025-11-25 09:31:46.002844163 +0000 UTC m=+0.029653971 container create 079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lehmann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:31:46 compute-0 python3[79237]: ansible-ansible.legacy.async_status Invoked with jid=j869286043895.78261 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 09:31:46 compute-0 sudo[79233]: pam_unix(sudo:session): session closed for user root
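The wrapper (PID 78261) detached the real module (PID 78351) and immediately returned a job id; the async_status invocations above then poll that job by id until the final cleanup. Under Ansible's documented async contract the wrapper records the module's result as JSON in a file named after the job id inside the async dir (here /root/.ansible_async, per the `_async_dir` argument). A sketch of such a poll, with the exact file contents an assumption:

    # Poll an Ansible async job file the way async_status does above.
    import json
    import time
    from pathlib import Path

    def wait_for_job(jid: str, async_dir: str = "/root/.ansible_async",
                     timeout: float = 30.0) -> dict:
        job_file = Path(async_dir) / jid
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if job_file.exists():
                try:
                    data = json.loads(job_file.read_text())
                except (json.JSONDecodeError, OSError):
                    data = {}          # file may still be mid-write
                if data.get("finished"):
                    return data        # final module result
            time.sleep(1)
        raise TimeoutError(f"async job {jid} did not finish")

    # e.g. wait_for_job("j869286043895.78261"), the jid seen in this log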
Nov 25 09:31:46 compute-0 systemd[1]: Started libpod-conmon-079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df.scope.
Nov 25 09:31:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:46 compute-0 podman[79260]: 2025-11-25 09:31:46.056002371 +0000 UTC m=+0.082812189 container init 079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:46 compute-0 podman[79260]: 2025-11-25 09:31:46.059903015 +0000 UTC m=+0.086712822 container start 079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:31:46 compute-0 podman[79260]: 2025-11-25 09:31:46.061028195 +0000 UTC m=+0.087838023 container attach 079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lehmann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:31:46 compute-0 friendly_lehmann[79273]: 167 167
Nov 25 09:31:46 compute-0 systemd[1]: libpod-079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df.scope: Deactivated successfully.
Nov 25 09:31:46 compute-0 conmon[79273]: conmon 079adbbbf56ccb6ab388 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df.scope/container/memory.events
Nov 25 09:31:46 compute-0 podman[79260]: 2025-11-25 09:31:46.063521365 +0000 UTC m=+0.090331163 container died 079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5de1d0ad72744fd814c9846bc7a42011246786fc7571ca58d466185a0e9cc513-merged.mount: Deactivated successfully.
Nov 25 09:31:46 compute-0 podman[79260]: 2025-11-25 09:31:46.081752542 +0000 UTC m=+0.108562350 container remove 079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 09:31:46 compute-0 podman[79260]: 2025-11-25 09:31:45.991140461 +0000 UTC m=+0.017950289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:31:46 compute-0 systemd[1]: libpod-conmon-079adbbbf56ccb6ab388295723a1496e13bad2a257a30accda61641a470dc6df.scope: Deactivated successfully.
Nov 25 09:31:46 compute-0 systemd[1]: Reloading.
Nov 25 09:31:46 compute-0 systemd-rc-local-generator[79309]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:31:46 compute-0 systemd-sysv-generator[79312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:31:46 compute-0 sudo[79346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ribssiyptllkfyirtvzdaqknelijlkdx ; /usr/bin/python3'
Nov 25 09:31:46 compute-0 sudo[79346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:46 compute-0 systemd[1]: Reloading.
Nov 25 09:31:46 compute-0 systemd-rc-local-generator[79373]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:31:46 compute-0 systemd-sysv-generator[79376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:31:46 compute-0 python3[79350]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 09:31:46 compute-0 systemd[1]: Starting Ceph crash.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:31:46 compute-0 sudo[79346]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:46 compute-0 podman[79431]: 2025-11-25 09:31:46.663468163 +0000 UTC m=+0.028672391 container create 4991d88b018fc1b1f8ed8c793a4ffa3ab7baafde41b08cd5006242b25f6264bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 25 09:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b5a8df94a4610ca3f5e957961c8f73fe4caa767e0e2960239b9440627831ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b5a8df94a4610ca3f5e957961c8f73fe4caa767e0e2960239b9440627831ce/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b5a8df94a4610ca3f5e957961c8f73fe4caa767e0e2960239b9440627831ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b5a8df94a4610ca3f5e957961c8f73fe4caa767e0e2960239b9440627831ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
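Each kernel line above is xfs noting that a bind-mounted path sits on a filesystem created without xfs's bigtime feature, so its timestamps are signed 32-bit seconds; 0x7fffffff is the time_t maximum, i.e. the 2038 rollover. A one-liner confirms the date:

    # 0x7fffffff interpreted as a Unix timestamp is the classic Y2038 limit.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00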
Nov 25 09:31:46 compute-0 ceph-mon[74207]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:31:46 compute-0 podman[79431]: 2025-11-25 09:31:46.702378933 +0000 UTC m=+0.067583181 container init 4991d88b018fc1b1f8ed8c793a4ffa3ab7baafde41b08cd5006242b25f6264bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Nov 25 09:31:46 compute-0 ceph-mon[74207]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:31:46 compute-0 ceph-mon[74207]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:31:46 compute-0 ceph-mon[74207]: Deploying daemon crash.compute-0 on compute-0
Nov 25 09:31:46 compute-0 podman[79431]: 2025-11-25 09:31:46.706219613 +0000 UTC m=+0.071423840 container start 4991d88b018fc1b1f8ed8c793a4ffa3ab7baafde41b08cd5006242b25f6264bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:46 compute-0 bash[79431]: 4991d88b018fc1b1f8ed8c793a4ffa3ab7baafde41b08cd5006242b25f6264bd
Nov 25 09:31:46 compute-0 podman[79431]: 2025-11-25 09:31:46.650324987 +0000 UTC m=+0.015529235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:31:46 compute-0 systemd[1]: Started Ceph crash.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:31:46 compute-0 sudo[79151]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:31:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 25 09:31:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 25 09:31:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:46 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev b7eebb96-1bc0-4485-9eb4-1bcf8daa7e1d (Updating crash deployment (+1 -> 1))
Nov 25 09:31:46 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event b7eebb96-1bc0-4485-9eb4-1bcf8daa7e1d (Updating crash deployment (+1 -> 1)) in 1 seconds
Nov 25 09:31:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 25 09:31:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 25 09:31:46 compute-0 sudo[79471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcuysejtbugbuwryglxiqktoorzitkya ; /usr/bin/python3'
Nov 25 09:31:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:46 compute-0 sudo[79471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 25 09:31:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:46 compute-0 sudo[79476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:31:46 compute-0 sudo[79476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:46 compute-0 sudo[79476]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:46 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: 2025-11-25T09:31:46.833+0000 7f9d7f666640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 09:31:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: 2025-11-25T09:31:46.833+0000 7f9d7f666640 -1 AuthRegistry(0x7f9d780698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 09:31:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: 2025-11-25T09:31:46.834+0000 7f9d7f666640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 09:31:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: 2025-11-25T09:31:46.834+0000 7f9d7f666640 -1 AuthRegistry(0x7f9d7f664ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 09:31:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: 2025-11-25T09:31:46.836+0000 7f9d7d3db640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 09:31:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: 2025-11-25T09:31:46.836+0000 7f9d7f666640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 25 09:31:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 25 09:31:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
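The startup ping above fails noisily: ceph-crash appears to try a list of candidate identities, and the attempt shown searched only the default client.admin keyring paths inside the container, found nothing, disabled cephx, and was refused (errno 13); the daemon proceeds to watch /var/lib/ceph/crash regardless, and its own keyring is bind-mounted (see the ceph.client.crash.compute-0.keyring xfs line above). Connecting with an explicit name and keyring sidesteps the default search list entirely; a sketch with paths taken from the log's mounts, for illustration only:

    # Connect as the crash identity with an explicit keyring, avoiding the
    # default client.admin keyring search seen failing above.
    import rados

    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        name="client.crash.compute-0",
        conf={"keyring": "/etc/ceph/ceph.client.crash.compute-0.keyring"},
    )
    cluster.connect()
    print(cluster.get_fsid())   # af1c9ae3-... on successful auth
    cluster.shutdown()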
Nov 25 09:31:46 compute-0 sudo[79501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:46 compute-0 sudo[79501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:46 compute-0 sudo[79501]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:46 compute-0 python3[79475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:31:46 compute-0 sudo[79536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:31:46 compute-0 sudo[79536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:46 compute-0 podman[79559]: 2025-11-25 09:31:46.920584578 +0000 UTC m=+0.027784377 container create a66db51ba695bad3736faf3d1693a8ea14015f8d4d055c82f663b014a6c93f29 (image=quay.io/ceph/ceph:v19, name=boring_nash, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 09:31:46 compute-0 systemd[1]: Started libpod-conmon-a66db51ba695bad3736faf3d1693a8ea14015f8d4d055c82f663b014a6c93f29.scope.
Nov 25 09:31:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049a872f60e34534e487b80565698a76c2222e9ca8ae3e76c6babb40f8750b16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049a872f60e34534e487b80565698a76c2222e9ca8ae3e76c6babb40f8750b16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049a872f60e34534e487b80565698a76c2222e9ca8ae3e76c6babb40f8750b16/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:46 compute-0 podman[79559]: 2025-11-25 09:31:46.975609174 +0000 UTC m=+0.082808973 container init a66db51ba695bad3736faf3d1693a8ea14015f8d4d055c82f663b014a6c93f29 (image=quay.io/ceph/ceph:v19, name=boring_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:46 compute-0 podman[79559]: 2025-11-25 09:31:46.980251695 +0000 UTC m=+0.087451494 container start a66db51ba695bad3736faf3d1693a8ea14015f8d4d055c82f663b014a6c93f29 (image=quay.io/ceph/ceph:v19, name=boring_nash, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:31:46 compute-0 podman[79559]: 2025-11-25 09:31:46.981275946 +0000 UTC m=+0.088475744 container attach a66db51ba695bad3736faf3d1693a8ea14015f8d4d055c82f663b014a6c93f29 (image=quay.io/ceph/ceph:v19, name=boring_nash, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:31:47 compute-0 podman[79559]: 2025-11-25 09:31:46.910099303 +0000 UTC m=+0.017299121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:31:47 compute-0 podman[79656]: 2025-11-25 09:31:47.256730901 +0000 UTC m=+0.035113222 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:47 compute-0 boring_nash[79574]: 
Nov 25 09:31:47 compute-0 boring_nash[79574]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 09:31:47 compute-0 systemd[1]: libpod-a66db51ba695bad3736faf3d1693a8ea14015f8d4d055c82f663b014a6c93f29.scope: Deactivated successfully.
Nov 25 09:31:47 compute-0 podman[79559]: 2025-11-25 09:31:47.273253698 +0000 UTC m=+0.380453506 container died a66db51ba695bad3736faf3d1693a8ea14015f8d4d055c82f663b014a6c93f29 (image=quay.io/ceph/ceph:v19, name=boring_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-049a872f60e34534e487b80565698a76c2222e9ca8ae3e76c6babb40f8750b16-merged.mount: Deactivated successfully.
Nov 25 09:31:47 compute-0 podman[79559]: 2025-11-25 09:31:47.294567437 +0000 UTC m=+0.401767235 container remove a66db51ba695bad3736faf3d1693a8ea14015f8d4d055c82f663b014a6c93f29 (image=quay.io/ceph/ceph:v19, name=boring_nash, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:31:47 compute-0 systemd[1]: libpod-conmon-a66db51ba695bad3736faf3d1693a8ea14015f8d4d055c82f663b014a6c93f29.scope: Deactivated successfully.
Nov 25 09:31:47 compute-0 sudo[79471]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:47 compute-0 podman[79656]: 2025-11-25 09:31:47.339053305 +0000 UTC m=+0.117435647 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:31:47 compute-0 sudo[79536]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 sudo[79766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrpsnndsuynqsxjdnzxkvuapikdwzplj ; /usr/bin/python3'
Nov 25 09:31:47 compute-0 sudo[79766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:47 compute-0 sudo[79729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:31:47 compute-0 sudo[79729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:47 compute-0 sudo[79729]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
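[editor's note] The repeated `config-key set` audit entries above are the cephadm mgr module persisting its state (host inventory, cached device lists, monitoring credentials) in the monitor's key/value store. A minimal sketch of inspecting that store from an admin node, assuming a working client.admin keyring; the key names are taken from the log, any values are illustrative:

    # List the keys cephadm keeps in the mon config-key store
    ceph config-key ls | grep mgr/cephadm
    # Dump one entry, e.g. the cached device inventory for compute-0
    ceph config-key get mgr/cephadm/host.compute-0.devices.0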
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
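[editor's note] `config generate-minimal-conf`, dispatched twice above, is what cephadm uses to render the small ceph.conf (fsid plus mon addresses) it distributes whenever it reconfigures a daemon. A sketch of calling it directly, assuming admin access to the cluster:

    # Emit a minimal ceph.conf containing the fsid and mon_host entries
    ceph config generate-minimal-conf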
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 09:31:47 compute-0 sudo[79774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:47 compute-0 sudo[79774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:47 compute-0 sudo[79774]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:31:47 compute-0 sudo[79799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:31:47 compute-0 sudo[79799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:47 compute-0 python3[79772]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
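[editor's note] The Ansible task above shows the pattern this job uses for every `ceph` invocation: a throwaway `podman run --rm` of the quay.io/ceph/ceph:v19 image with /etc/ceph bind-mounted, `--entrypoint ceph`, and the admin keyring passed via -k. Stripped of the job-specific mounts, the reusable shape is roughly (a sketch derived from the log line, not a new command):

    podman run --rm --net=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config set global log_to_file true

On a cephadm-managed host, `cephadm shell -- ceph config set global log_to_file true` is the usual shorthand for the same operation.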
Nov 25 09:31:47 compute-0 podman[79824]: 2025-11-25 09:31:47.679958951 +0000 UTC m=+0.026190012 container create a49579f8e1c48db88782ecb7ae0f704455874a14b54e0e74e4ea9c85655486b9 (image=quay.io/ceph/ceph:v19, name=blissful_moser, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:47 compute-0 systemd[1]: Started libpod-conmon-a49579f8e1c48db88782ecb7ae0f704455874a14b54e0e74e4ea9c85655486b9.scope.
Nov 25 09:31:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4792602f05fd7c0466269d65545066859c132e75cc565e257e834216ef9c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4792602f05fd7c0466269d65545066859c132e75cc565e257e834216ef9c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4792602f05fd7c0466269d65545066859c132e75cc565e257e834216ef9c6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:47 compute-0 podman[79824]: 2025-11-25 09:31:47.735130342 +0000 UTC m=+0.081361424 container init a49579f8e1c48db88782ecb7ae0f704455874a14b54e0e74e4ea9c85655486b9 (image=quay.io/ceph/ceph:v19, name=blissful_moser, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:31:47 compute-0 podman[79824]: 2025-11-25 09:31:47.739818881 +0000 UTC m=+0.086049943 container start a49579f8e1c48db88782ecb7ae0f704455874a14b54e0e74e4ea9c85655486b9 (image=quay.io/ceph/ceph:v19, name=blissful_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:31:47 compute-0 podman[79824]: 2025-11-25 09:31:47.740816461 +0000 UTC m=+0.087047523 container attach a49579f8e1c48db88782ecb7ae0f704455874a14b54e0e74e4ea9c85655486b9 (image=quay.io/ceph/ceph:v19, name=blissful_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:47 compute-0 podman[79824]: 2025-11-25 09:31:47.669237109 +0000 UTC m=+0.015468189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 1 completed events
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:31:47 compute-0 podman[79874]: 2025-11-25 09:31:47.868669465 +0000 UTC m=+0.030066078 container create 35e40dfb3598d3cc07737d829f4cdc8030a831b4953d7fd0fe504beaf7dfd912 (image=quay.io/ceph/ceph:v19, name=compassionate_buck, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 systemd[1]: Started libpod-conmon-35e40dfb3598d3cc07737d829f4cdc8030a831b4953d7fd0fe504beaf7dfd912.scope.
Nov 25 09:31:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:47 compute-0 podman[79874]: 2025-11-25 09:31:47.917090953 +0000 UTC m=+0.078487556 container init 35e40dfb3598d3cc07737d829f4cdc8030a831b4953d7fd0fe504beaf7dfd912 (image=quay.io/ceph/ceph:v19, name=compassionate_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:47 compute-0 podman[79874]: 2025-11-25 09:31:47.921583973 +0000 UTC m=+0.082980576 container start 35e40dfb3598d3cc07737d829f4cdc8030a831b4953d7fd0fe504beaf7dfd912 (image=quay.io/ceph/ceph:v19, name=compassionate_buck, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:31:47 compute-0 compassionate_buck[79887]: 167 167
Nov 25 09:31:47 compute-0 podman[79874]: 2025-11-25 09:31:47.92332196 +0000 UTC m=+0.084718563 container attach 35e40dfb3598d3cc07737d829f4cdc8030a831b4953d7fd0fe504beaf7dfd912 (image=quay.io/ceph/ceph:v19, name=compassionate_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:31:47 compute-0 systemd[1]: libpod-35e40dfb3598d3cc07737d829f4cdc8030a831b4953d7fd0fe504beaf7dfd912.scope: Deactivated successfully.
Nov 25 09:31:47 compute-0 podman[79874]: 2025-11-25 09:31:47.924189154 +0000 UTC m=+0.085585757 container died 35e40dfb3598d3cc07737d829f4cdc8030a831b4953d7fd0fe504beaf7dfd912 (image=quay.io/ceph/ceph:v19, name=compassionate_buck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 09:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ea8fa79025a619fa12c54b0fec2431aac97ec4bde2ec8baa84225b0457b5680-merged.mount: Deactivated successfully.
Nov 25 09:31:47 compute-0 podman[79874]: 2025-11-25 09:31:47.941483626 +0000 UTC m=+0.102880228 container remove 35e40dfb3598d3cc07737d829f4cdc8030a831b4953d7fd0fe504beaf7dfd912 (image=quay.io/ceph/ceph:v19, name=compassionate_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:47 compute-0 podman[79874]: 2025-11-25 09:31:47.853884505 +0000 UTC m=+0.015281128 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:47 compute-0 systemd[1]: libpod-conmon-35e40dfb3598d3cc07737d829f4cdc8030a831b4953d7fd0fe504beaf7dfd912.scope: Deactivated successfully.
Nov 25 09:31:47 compute-0 sudo[79799]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.zcfgby (unknown last config time)...
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.zcfgby (unknown last config time)...
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.zcfgby", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zcfgby", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:31:47 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.zcfgby on compute-0
Nov 25 09:31:47 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.zcfgby on compute-0
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2807911582' entity='client.admin' 
Nov 25 09:31:48 compute-0 sudo[79902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:48 compute-0 sudo[79902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:48 compute-0 sudo[79902]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:48 compute-0 systemd[1]: libpod-a49579f8e1c48db88782ecb7ae0f704455874a14b54e0e74e4ea9c85655486b9.scope: Deactivated successfully.
Nov 25 09:31:48 compute-0 podman[79824]: 2025-11-25 09:31:48.037050988 +0000 UTC m=+0.383282049 container died a49579f8e1c48db88782ecb7ae0f704455874a14b54e0e74e4ea9c85655486b9 (image=quay.io/ceph/ceph:v19, name=blissful_moser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-17a4792602f05fd7c0466269d65545066859c132e75cc565e257e834216ef9c6-merged.mount: Deactivated successfully.
Nov 25 09:31:48 compute-0 podman[79824]: 2025-11-25 09:31:48.065389027 +0000 UTC m=+0.411620088 container remove a49579f8e1c48db88782ecb7ae0f704455874a14b54e0e74e4ea9c85655486b9 (image=quay.io/ceph/ceph:v19, name=blissful_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:31:48 compute-0 systemd[1]: libpod-conmon-a49579f8e1c48db88782ecb7ae0f704455874a14b54e0e74e4ea9c85655486b9.scope: Deactivated successfully.
Nov 25 09:31:48 compute-0 sudo[79929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
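[editor's note] `_orch deploy` is an internal cephadm entry point: the mgr copies the cephadm binary to /var/lib/ceph/<fsid>/cephadm.<digest> and re-runs it via sudo as ceph-admin, as the sudo lines above show. The visible result is one systemd unit per daemon, named after the fsid. A sketch of checking what this produces on the host (fsid taken from the log):

    # cephadm-managed daemons run as ceph-<fsid>@<daemon>.<id> units
    systemctl status ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mon.compute-0.service
    # or list everything cephadm has deployed on this host
    cephadm ls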
Nov 25 09:31:48 compute-0 sudo[79929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:48 compute-0 sudo[79766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:48 compute-0 sudo[79987]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edqnzqqrfyruxmxtstkmeagzwxdufqax ; /usr/bin/python3'
Nov 25 09:31:48 compute-0 sudo[79987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:48 compute-0 python3[79989]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:31:48 compute-0 podman[80004]: 2025-11-25 09:31:48.308146845 +0000 UTC m=+0.027373933 container create bb98a1005073039bacc2fd57a559c1b3286504fc33185dce519722e396c0e275 (image=quay.io/ceph/ceph:v19, name=upbeat_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:31:48 compute-0 systemd[1]: Started libpod-conmon-bb98a1005073039bacc2fd57a559c1b3286504fc33185dce519722e396c0e275.scope.
Nov 25 09:31:48 compute-0 podman[80014]: 2025-11-25 09:31:48.339174305 +0000 UTC m=+0.032674325 container create cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5 (image=quay.io/ceph/ceph:v19, name=laughing_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:48 compute-0 podman[80004]: 2025-11-25 09:31:48.347951571 +0000 UTC m=+0.067178669 container init bb98a1005073039bacc2fd57a559c1b3286504fc33185dce519722e396c0e275 (image=quay.io/ceph/ceph:v19, name=upbeat_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:31:48 compute-0 podman[80004]: 2025-11-25 09:31:48.354320497 +0000 UTC m=+0.073547574 container start bb98a1005073039bacc2fd57a559c1b3286504fc33185dce519722e396c0e275 (image=quay.io/ceph/ceph:v19, name=upbeat_cohen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:31:48 compute-0 podman[80004]: 2025-11-25 09:31:48.355924751 +0000 UTC m=+0.075151849 container attach bb98a1005073039bacc2fd57a559c1b3286504fc33185dce519722e396c0e275 (image=quay.io/ceph/ceph:v19, name=upbeat_cohen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:31:48 compute-0 upbeat_cohen[80025]: 167 167
Nov 25 09:31:48 compute-0 podman[80004]: 2025-11-25 09:31:48.357143579 +0000 UTC m=+0.076370656 container died bb98a1005073039bacc2fd57a559c1b3286504fc33185dce519722e396c0e275 (image=quay.io/ceph/ceph:v19, name=upbeat_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:48 compute-0 systemd[1]: Started libpod-conmon-cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5.scope.
Nov 25 09:31:48 compute-0 systemd[1]: libpod-bb98a1005073039bacc2fd57a559c1b3286504fc33185dce519722e396c0e275.scope: Deactivated successfully.
Nov 25 09:31:48 compute-0 podman[80004]: 2025-11-25 09:31:48.374132143 +0000 UTC m=+0.093359221 container remove bb98a1005073039bacc2fd57a559c1b3286504fc33185dce519722e396c0e275 (image=quay.io/ceph/ceph:v19, name=upbeat_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:48 compute-0 podman[80004]: 2025-11-25 09:31:48.296489901 +0000 UTC m=+0.015716999 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3db74c0bafdf003d052fef022efc9c903d59efab84306aa2703443a82b32393c-merged.mount: Deactivated successfully.
Nov 25 09:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0f6c3d7a15fcb6f75720dbf7613d567c7542bd51cdb7f743ed480ce305082e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0f6c3d7a15fcb6f75720dbf7613d567c7542bd51cdb7f743ed480ce305082e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0f6c3d7a15fcb6f75720dbf7613d567c7542bd51cdb7f743ed480ce305082e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:48 compute-0 podman[80014]: 2025-11-25 09:31:48.387750365 +0000 UTC m=+0.081250385 container init cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5 (image=quay.io/ceph/ceph:v19, name=laughing_poincare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:48 compute-0 podman[80014]: 2025-11-25 09:31:48.394168904 +0000 UTC m=+0.087668914 container start cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5 (image=quay.io/ceph/ceph:v19, name=laughing_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:48 compute-0 podman[80014]: 2025-11-25 09:31:48.395303734 +0000 UTC m=+0.088803744 container attach cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5 (image=quay.io/ceph/ceph:v19, name=laughing_poincare, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:31:48 compute-0 systemd[1]: libpod-conmon-bb98a1005073039bacc2fd57a559c1b3286504fc33185dce519722e396c0e275.scope: Deactivated successfully.
Nov 25 09:31:48 compute-0 sudo[79929]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 podman[80014]: 2025-11-25 09:31:48.328204156 +0000 UTC m=+0.021704185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:48 compute-0 sudo[80048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:31:48 compute-0 sudo[80048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:48 compute-0 sudo[80048]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933074026' entity='client.admin' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:31:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 systemd[1]: libpod-cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5.scope: Deactivated successfully.
Nov 25 09:31:48 compute-0 conmon[80034]: conmon cd89f9ccfd9b1004e831 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5.scope/container/memory.events
Nov 25 09:31:48 compute-0 podman[80014]: 2025-11-25 09:31:48.679275052 +0000 UTC m=+0.372775062 container died cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5 (image=quay.io/ceph/ceph:v19, name=laughing_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:31:48 compute-0 sudo[80094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:31:48 compute-0 sudo[80094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:48 compute-0 sudo[80094]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:48 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 09:31:48 compute-0 ceph-mon[74207]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zcfgby", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2807911582' entity='client.admin' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3933074026' entity='client.admin' 
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a0f6c3d7a15fcb6f75720dbf7613d567c7542bd51cdb7f743ed480ce305082e-merged.mount: Deactivated successfully.
Nov 25 09:31:49 compute-0 podman[80014]: 2025-11-25 09:31:49.101471698 +0000 UTC m=+0.794971707 container remove cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5 (image=quay.io/ceph/ceph:v19, name=laughing_poincare, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:49 compute-0 sudo[79987]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:49 compute-0 systemd[1]: libpod-conmon-cd89f9ccfd9b1004e831079110c9e07e71aec9ebdaacf50d884d2f5dd953f3c5.scope: Deactivated successfully.
Nov 25 09:31:49 compute-0 sudo[80154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ootvgzphadijweylzdsxydjuynuwfksw ; /usr/bin/python3'
Nov 25 09:31:49 compute-0 sudo[80154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:49 compute-0 python3[80156]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:31:49 compute-0 podman[80157]: 2025-11-25 09:31:49.395591476 +0000 UTC m=+0.026430985 container create 8c9f19bb88d17c8f014cb7f0bc41c1a898708168a8a325ec02e24f34cdd147bf (image=quay.io/ceph/ceph:v19, name=serene_archimedes, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:31:49 compute-0 systemd[1]: Started libpod-conmon-8c9f19bb88d17c8f014cb7f0bc41c1a898708168a8a325ec02e24f34cdd147bf.scope.
Nov 25 09:31:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c9c2ba84b138225eb498bb843333eb5a304b3df3b78abc4357e64f038744be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c9c2ba84b138225eb498bb843333eb5a304b3df3b78abc4357e64f038744be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c9c2ba84b138225eb498bb843333eb5a304b3df3b78abc4357e64f038744be/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:49 compute-0 podman[80157]: 2025-11-25 09:31:49.442019468 +0000 UTC m=+0.072858987 container init 8c9f19bb88d17c8f014cb7f0bc41c1a898708168a8a325ec02e24f34cdd147bf (image=quay.io/ceph/ceph:v19, name=serene_archimedes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Nov 25 09:31:49 compute-0 podman[80157]: 2025-11-25 09:31:49.445667003 +0000 UTC m=+0.076506502 container start 8c9f19bb88d17c8f014cb7f0bc41c1a898708168a8a325ec02e24f34cdd147bf (image=quay.io/ceph/ceph:v19, name=serene_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:49 compute-0 podman[80157]: 2025-11-25 09:31:49.447026276 +0000 UTC m=+0.077865775 container attach 8c9f19bb88d17c8f014cb7f0bc41c1a898708168a8a325ec02e24f34cdd147bf (image=quay.io/ceph/ceph:v19, name=serene_archimedes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:49 compute-0 podman[80157]: 2025-11-25 09:31:49.384716787 +0000 UTC m=+0.015556296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:49 compute-0 ansible-async_wrapper.py[78345]: Done in kid B.
Nov 25 09:31:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Nov 25 09:31:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1413920672' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 09:31:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 25 09:31:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:31:49 compute-0 ceph-mon[74207]: Reconfiguring mgr.compute-0.zcfgby (unknown last config time)...
Nov 25 09:31:49 compute-0 ceph-mon[74207]: Reconfiguring daemon mgr.compute-0.zcfgby on compute-0
Nov 25 09:31:49 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1413920672' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 09:31:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1413920672' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 09:31:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 25 09:31:49 compute-0 serene_archimedes[80169]: set require_min_compat_client to mimic
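[editor's note] The container's single line of output confirms the flag took effect: clients older than Mimic can no longer connect. A quick way to verify the setting afterwards, as a sketch:

    # The osdmap records the floor; look for require_min_compat_client
    ceph osd dump | grep min_compat_client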
Nov 25 09:31:49 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 25 09:31:49 compute-0 systemd[1]: libpod-8c9f19bb88d17c8f014cb7f0bc41c1a898708168a8a325ec02e24f34cdd147bf.scope: Deactivated successfully.
Nov 25 09:31:49 compute-0 podman[80194]: 2025-11-25 09:31:49.919641803 +0000 UTC m=+0.016671108 container died 8c9f19bb88d17c8f014cb7f0bc41c1a898708168a8a325ec02e24f34cdd147bf (image=quay.io/ceph/ceph:v19, name=serene_archimedes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3c9c2ba84b138225eb498bb843333eb5a304b3df3b78abc4357e64f038744be-merged.mount: Deactivated successfully.
Nov 25 09:31:49 compute-0 podman[80194]: 2025-11-25 09:31:49.937470191 +0000 UTC m=+0.034499486 container remove 8c9f19bb88d17c8f014cb7f0bc41c1a898708168a8a325ec02e24f34cdd147bf (image=quay.io/ceph/ceph:v19, name=serene_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:31:49 compute-0 systemd[1]: libpod-conmon-8c9f19bb88d17c8f014cb7f0bc41c1a898708168a8a325ec02e24f34cdd147bf.scope: Deactivated successfully.
Nov 25 09:31:49 compute-0 sudo[80154]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:50 compute-0 sudo[80229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laesxfjrhvpbjbryoswoltfkpqtnxtkc ; /usr/bin/python3'
Nov 25 09:31:50 compute-0 sudo[80229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:50 compute-0 python3[80231]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
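[editor's note] `orch apply --in-file` hands the whole service specification to the orchestrator in one shot. The log never shows the contents of /home/ceph_spec.yaml, but a cephadm spec file of this kind is a YAML stream of service blocks; a minimal illustrative sketch (service types, host names, and counts are assumptions, not taken from the log):

    # Hypothetical minimal spec in the cephadm service-spec format
    cat > ceph_spec.yaml <<'EOF'
    service_type: mon
    placement:
      hosts:
        - compute-0
    ---
    service_type: mgr
    placement:
      count: 1
    EOF
    ceph orch apply -i ceph_spec.yaml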
Nov 25 09:31:50 compute-0 podman[80232]: 2025-11-25 09:31:50.427232693 +0000 UTC m=+0.027737898 container create b8d34212faae43e2a8bc9d64e1f20165b746104aef1473ba9a53a2888a158420 (image=quay.io/ceph/ceph:v19, name=condescending_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:31:50 compute-0 systemd[1]: Started libpod-conmon-b8d34212faae43e2a8bc9d64e1f20165b746104aef1473ba9a53a2888a158420.scope.
Nov 25 09:31:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d40b71c6f61383f57dc172b203d8f814053deae92e64ff02964f9b065bfa516/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d40b71c6f61383f57dc172b203d8f814053deae92e64ff02964f9b065bfa516/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d40b71c6f61383f57dc172b203d8f814053deae92e64ff02964f9b065bfa516/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:50 compute-0 podman[80232]: 2025-11-25 09:31:50.485127299 +0000 UTC m=+0.085632505 container init b8d34212faae43e2a8bc9d64e1f20165b746104aef1473ba9a53a2888a158420 (image=quay.io/ceph/ceph:v19, name=condescending_engelbart, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:31:50 compute-0 podman[80232]: 2025-11-25 09:31:50.488781428 +0000 UTC m=+0.089286624 container start b8d34212faae43e2a8bc9d64e1f20165b746104aef1473ba9a53a2888a158420 (image=quay.io/ceph/ceph:v19, name=condescending_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:31:50 compute-0 podman[80232]: 2025-11-25 09:31:50.490155818 +0000 UTC m=+0.090661024 container attach b8d34212faae43e2a8bc9d64e1f20165b746104aef1473ba9a53a2888a158420 (image=quay.io/ceph/ceph:v19, name=condescending_engelbart, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:31:50 compute-0 podman[80232]: 2025-11-25 09:31:50.416253557 +0000 UTC m=+0.016758783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:50 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:50 compute-0 sudo[80268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:31:50 compute-0 sudo[80268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:50 compute-0 sudo[80268]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:50 compute-0 ceph-mgr[74476]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 09:31:50 compute-0 sudo[80293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Nov 25 09:31:50 compute-0 sudo[80293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:50 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1413920672' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 09:31:50 compute-0 ceph-mon[74207]: osdmap e3: 0 total, 0 up, 0 in
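The mon has just finished "osd set-require-min-compat-client mimic" (logged above). A quick way to confirm what the osdmap recorded, a sketch using the field name "ceph osd dump" prints:

    # Verify the minimum client release required by the osdmap
    ceph osd dump | grep require_min_compat_client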
Nov 25 09:31:51 compute-0 sudo[80293]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 25 09:31:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 25 09:31:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 25 09:31:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 25 09:31:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:51 compute-0 ceph-mgr[74476]: [cephadm INFO root] Added host compute-0
Nov 25 09:31:51 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 09:31:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:31:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:31:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:31:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:51 compute-0 sudo[80336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:31:51 compute-0 sudo[80336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:31:51 compute-0 sudo[80336]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:51 compute-0 ceph-mon[74207]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:31:51 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:51 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:51 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:51 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:51 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:31:51 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:31:51 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:52 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Nov 25 09:31:52 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Nov 25 09:31:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:31:52 compute-0 ceph-mgr[74476]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 25 09:31:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:31:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 09:31:52 compute-0 ceph-mon[74207]: Added host compute-0
Nov 25 09:31:52 compute-0 ceph-mon[74207]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
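TOO_FEW_OSDS fires because the OSD count (0) is below osd_pool_default_size, which is set to 1 here (a single-replica test value; the upstream default is 3). Two commands to inspect the check and the setting behind it:

    # Show the full text of the raised health checks
    ceph health detail
    # Confirm the pool default size the check compares against
    ceph config get mon osd_pool_default_size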
Nov 25 09:31:53 compute-0 ceph-mon[74207]: Deploying cephadm binary to compute-1
Nov 25 09:31:53 compute-0 ceph-mon[74207]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:31:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 25 09:31:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:54 compute-0 ceph-mgr[74476]: [cephadm INFO root] Added host compute-1
Nov 25 09:31:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Added host compute-1
Nov 25 09:31:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:31:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:31:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:31:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:55 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Nov 25 09:31:55 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Nov 25 09:31:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:55 compute-0 ceph-mon[74207]: Added host compute-1
Nov 25 09:31:55 compute-0 ceph-mon[74207]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:31:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:31:56 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:56 compute-0 ceph-mon[74207]: Deploying cephadm binary to compute-2
Nov 25 09:31:56 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:31:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:31:57 compute-0 ceph-mon[74207]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:31:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 25 09:31:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: [cephadm INFO root] Added host compute-2
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Added host compute-2
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 25 09:31:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 25 09:31:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 25 09:31:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 25 09:31:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 25 09:31:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Nov 25 09:31:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:58 compute-0 condescending_engelbart[80244]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 09:31:58 compute-0 condescending_engelbart[80244]: Added host 'compute-1' with addr '192.168.122.101'
Nov 25 09:31:58 compute-0 condescending_engelbart[80244]: Added host 'compute-2' with addr '192.168.122.102'
Nov 25 09:31:58 compute-0 condescending_engelbart[80244]: Scheduled mon update...
Nov 25 09:31:58 compute-0 condescending_engelbart[80244]: Scheduled mgr update...
Nov 25 09:31:58 compute-0 condescending_engelbart[80244]: Scheduled osd.default_drive_group update...
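The three "Scheduled ... update" lines acknowledge the three specs carried in ceph_spec.yaml: mon, mgr, and osd.default_drive_group (the mon and mgr bodies are echoed verbatim in the failure messages below). To see exactly what the orchestrator recorded:

    # Dump the stored service specs back out as YAML
    ceph orch ls --export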
Nov 25 09:31:58 compute-0 systemd[1]: libpod-b8d34212faae43e2a8bc9d64e1f20165b746104aef1473ba9a53a2888a158420.scope: Deactivated successfully.
Nov 25 09:31:58 compute-0 podman[80232]: 2025-11-25 09:31:58.625199764 +0000 UTC m=+8.225704970 container died b8d34212faae43e2a8bc9d64e1f20165b746104aef1473ba9a53a2888a158420 (image=quay.io/ceph/ceph:v19, name=condescending_engelbart, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d40b71c6f61383f57dc172b203d8f814053deae92e64ff02964f9b065bfa516-merged.mount: Deactivated successfully.
Nov 25 09:31:58 compute-0 podman[80232]: 2025-11-25 09:31:58.644396302 +0000 UTC m=+8.244901507 container remove b8d34212faae43e2a8bc9d64e1f20165b746104aef1473ba9a53a2888a158420 (image=quay.io/ceph/ceph:v19, name=condescending_engelbart, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:58 compute-0 systemd[1]: libpod-conmon-b8d34212faae43e2a8bc9d64e1f20165b746104aef1473ba9a53a2888a158420.scope: Deactivated successfully.
Nov 25 09:31:58 compute-0 sudo[80229]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:58 compute-0 sudo[80395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwjfcybkvuuokmwrbkkkktlircqariz ; /usr/bin/python3'
Nov 25 09:31:58 compute-0 sudo[80395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:31:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:31:58 compute-0 python3[80397]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:31:58 compute-0 podman[80399]: 2025-11-25 09:31:58.967302706 +0000 UTC m=+0.026191494 container create 2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a (image=quay.io/ceph/ceph:v19, name=brave_roentgen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:31:58 compute-0 systemd[1]: Started libpod-conmon-2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a.scope.
Nov 25 09:31:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8641e42ff9bbad112b9ea96455323f1c47edfd9de934168c48c11bae1b42d4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8641e42ff9bbad112b9ea96455323f1c47edfd9de934168c48c11bae1b42d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8641e42ff9bbad112b9ea96455323f1c47edfd9de934168c48c11bae1b42d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:31:59 compute-0 podman[80399]: 2025-11-25 09:31:59.024723128 +0000 UTC m=+0.083611936 container init 2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a (image=quay.io/ceph/ceph:v19, name=brave_roentgen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:59 compute-0 podman[80399]: 2025-11-25 09:31:59.029556761 +0000 UTC m=+0.088445549 container start 2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a (image=quay.io/ceph/ceph:v19, name=brave_roentgen, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 09:31:59 compute-0 podman[80399]: 2025-11-25 09:31:59.030655041 +0000 UTC m=+0.089543829 container attach 2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a (image=quay.io/ceph/ceph:v19, name=brave_roentgen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:31:59 compute-0 podman[80399]: 2025-11-25 09:31:58.956331755 +0000 UTC m=+0.015220553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:31:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 25 09:31:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305128803' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:31:59 compute-0 brave_roentgen[80412]: 
Nov 25 09:31:59 compute-0 brave_roentgen[80412]: {"fsid":"af1c9ae3-08d7-5547-a53d-2cccf7c6ef90","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":41,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-11-25T09:31:16:071954+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-25T09:31:16.073226+0000","services":{}},"progress_events":{}}
Nov 25 09:31:59 compute-0 systemd[1]: libpod-2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a.scope: Deactivated successfully.
Nov 25 09:31:59 compute-0 conmon[80412]: conmon 2e5d642130ae6c24615b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a.scope/container/memory.events
Nov 25 09:31:59 compute-0 podman[80399]: 2025-11-25 09:31:59.352808716 +0000 UTC m=+0.411697504 container died 2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a (image=quay.io/ceph/ceph:v19, name=brave_roentgen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f8641e42ff9bbad112b9ea96455323f1c47edfd9de934168c48c11bae1b42d4-merged.mount: Deactivated successfully.
Nov 25 09:31:59 compute-0 podman[80399]: 2025-11-25 09:31:59.373537081 +0000 UTC m=+0.432425869 container remove 2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a (image=quay.io/ceph/ceph:v19, name=brave_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:31:59 compute-0 systemd[1]: libpod-conmon-2e5d642130ae6c24615b039b52fea5700d5104b42af6e309f9717e9abf91963a.scope: Deactivated successfully.
Nov 25 09:31:59 compute-0 sudo[80395]: pam_unix(sudo:session): session closed for user root
Nov 25 09:31:59 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:59 compute-0 ceph-mon[74207]: Added host compute-2
Nov 25 09:31:59 compute-0 ceph-mon[74207]: Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 25 09:31:59 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:59 compute-0 ceph-mon[74207]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 25 09:31:59 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:59 compute-0 ceph-mon[74207]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 09:31:59 compute-0 ceph-mon[74207]: Marking host: compute-1 for OSDSpec preview refresh.
Nov 25 09:31:59 compute-0 ceph-mon[74207]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 25 09:31:59 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:31:59 compute-0 ceph-mon[74207]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:31:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3305128803' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:32:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:01 compute-0 ceph-mon[74207]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:03 compute-0 ceph-mon[74207]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:05 compute-0 ceph-mon[74207]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:07 compute-0 ceph-mon[74207]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:09 compute-0 ceph-mon[74207]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:11 compute-0 ceph-mon[74207]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:12 compute-0 ceph-mon[74207]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 25 09:32:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:32:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:32:14 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:32:14 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:32:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:14 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:32:14 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:32:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:32:15 compute-0 ceph-mon[74207]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:32:15 compute-0 ceph-mon[74207]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:15 compute-0 ceph-mon[74207]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:32:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:32:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:32:15.843+0000 7fdab6995640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: service_name: mon
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: placement:
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   hosts:
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   - compute-0
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   - compute-1
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   - compute-2
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:32:15.843+0000 7fdab6995640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: service_name: mgr
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: placement:
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   hosts:
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   - compute-0
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   - compute-1
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   - compute-2
Nov 25 09:32:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev c496a917-fb64-4479-8ea9-8be7bac16643 (Updating crash deployment (+1 -> 2))
Nov 25 09:32:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 25 09:32:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:32:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 09:32:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:15 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Nov 25 09:32:15 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Nov 25 09:32:16 compute-0 ceph-mon[74207]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:32:16 compute-0 ceph-mon[74207]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:32:16 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:16 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:16 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:16 compute-0 ceph-mon[74207]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 25 09:32:16 compute-0 ceph-mon[74207]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:16 compute-0 ceph-mon[74207]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 25 09:32:16 compute-0 ceph-mon[74207]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:16 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:32:16 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 09:32:16 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:16 compute-0 ceph-mon[74207]: Deploying daemon crash.compute-1 on compute-1
Nov 25 09:32:16 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:17 compute-0 ceph-mon[74207]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
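"Unknown hosts" for compute-2 is surprising given the "Added host compute-2" entry at 09:31:58; the likely reading is that this serve pass evaluated the mon and mgr specs against an inventory snapshot taken before the host add committed, in which case the next pass re-applies the specs and the health flag clears on its own. Triage commands if it does not (the address is the one logged for compute-2 above):

    # Check which hosts the orchestrator currently knows
    ceph orch host ls
    # Re-add the host only if it is genuinely missing from the list
    ceph orch host add compute-2 192.168.122.102
    # Confirm the flag clears after the next serve pass
    ceph health detail | grep -A2 CEPHADM_APPLY_SPEC_FAIL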
Nov 25 09:32:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:17 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev c496a917-fb64-4479-8ea9-8be7bac16643 (Updating crash deployment (+1 -> 2))
Nov 25 09:32:17 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event c496a917-fb64-4479-8ea9-8be7bac16643 (Updating crash deployment (+1 -> 2)) in 2 seconds
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:17 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 2 completed events
Nov 25 09:32:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:32:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:17 compute-0 sudo[80446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:17 compute-0 sudo[80446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:17 compute-0 sudo[80446]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:17 compute-0 sudo[80471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:32:17 compute-0 sudo[80471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
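The cephadm call above drives "ceph-volume lvm batch" against the pre-created logical volume /dev/ceph_vg0/ceph_lv0; --no-auto keeps ceph-volume from second-guessing the device list, and --no-systemd defers unit management to cephadm itself. A sketch of how to verify what ceph-volume claimed afterwards, run on the host:

    # List LVs tagged by ceph-volume together with their OSD ids and fsids
    cephadm shell -- ceph-volume lvm list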
Nov 25 09:32:18 compute-0 podman[80526]: 2025-11-25 09:32:18.217047994 +0000 UTC m=+0.027039453 container create 0015fab843ebcfac1589cf5c09eb74174e40392dbe0fd6b70a63aaaf8af07485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:32:18 compute-0 systemd[1]: Started libpod-conmon-0015fab843ebcfac1589cf5c09eb74174e40392dbe0fd6b70a63aaaf8af07485.scope.
Nov 25 09:32:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:18 compute-0 podman[80526]: 2025-11-25 09:32:18.26213694 +0000 UTC m=+0.072128420 container init 0015fab843ebcfac1589cf5c09eb74174e40392dbe0fd6b70a63aaaf8af07485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:18 compute-0 podman[80526]: 2025-11-25 09:32:18.266376873 +0000 UTC m=+0.076368332 container start 0015fab843ebcfac1589cf5c09eb74174e40392dbe0fd6b70a63aaaf8af07485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:18 compute-0 podman[80526]: 2025-11-25 09:32:18.267785628 +0000 UTC m=+0.077777087 container attach 0015fab843ebcfac1589cf5c09eb74174e40392dbe0fd6b70a63aaaf8af07485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curie, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:18 compute-0 lucid_curie[80539]: 167 167
Nov 25 09:32:18 compute-0 systemd[1]: libpod-0015fab843ebcfac1589cf5c09eb74174e40392dbe0fd6b70a63aaaf8af07485.scope: Deactivated successfully.
Nov 25 09:32:18 compute-0 podman[80544]: 2025-11-25 09:32:18.299076384 +0000 UTC m=+0.016346425 container died 0015fab843ebcfac1589cf5c09eb74174e40392dbe0fd6b70a63aaaf8af07485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:32:18 compute-0 podman[80526]: 2025-11-25 09:32:18.205590957 +0000 UTC m=+0.015582416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-48f9a142ecd683579f64fc12a49d0e5a030206be9bbd23b20d84613d12a5d37a-merged.mount: Deactivated successfully.
Nov 25 09:32:18 compute-0 podman[80544]: 2025-11-25 09:32:18.315325776 +0000 UTC m=+0.032595817 container remove 0015fab843ebcfac1589cf5c09eb74174e40392dbe0fd6b70a63aaaf8af07485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curie, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:32:18 compute-0 systemd[1]: libpod-conmon-0015fab843ebcfac1589cf5c09eb74174e40392dbe0fd6b70a63aaaf8af07485.scope: Deactivated successfully.
Nov 25 09:32:18 compute-0 podman[80562]: 2025-11-25 09:32:18.427380477 +0000 UTC m=+0.027039332 container create ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ride, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:32:18 compute-0 systemd[1]: Started libpod-conmon-ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494.scope.
Nov 25 09:32:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484cfc8ac9741ccc4eda0b2a58b5029d0a8ab5d7bfd29ddac0eb53e7f24ad4f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484cfc8ac9741ccc4eda0b2a58b5029d0a8ab5d7bfd29ddac0eb53e7f24ad4f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484cfc8ac9741ccc4eda0b2a58b5029d0a8ab5d7bfd29ddac0eb53e7f24ad4f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484cfc8ac9741ccc4eda0b2a58b5029d0a8ab5d7bfd29ddac0eb53e7f24ad4f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484cfc8ac9741ccc4eda0b2a58b5029d0a8ab5d7bfd29ddac0eb53e7f24ad4f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
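The repeated "supports timestamps until 2038" kernel messages are informational, not errors: each bind mount into the container crosses an xfs filesystem whose inodes carry 32-bit timestamps (the bigtime feature is off). A quick way to confirm, sketched here against the container-storage mount (field availability depends on the xfsprogs version):

    # bigtime=1 would mean the filesystem already supports post-2038 timestamps
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'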
Nov 25 09:32:18 compute-0 podman[80562]: 2025-11-25 09:32:18.490787826 +0000 UTC m=+0.090446690 container init ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ride, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:18 compute-0 podman[80562]: 2025-11-25 09:32:18.496940033 +0000 UTC m=+0.096598888 container start ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ride, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:18 compute-0 podman[80562]: 2025-11-25 09:32:18.4980953 +0000 UTC m=+0.097754155 container attach ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ride, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:32:18 compute-0 podman[80562]: 2025-11-25 09:32:18.415701501 +0000 UTC m=+0.015360376 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:18 compute-0 epic_ride[80576]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:32:18 compute-0 epic_ride[80576]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:18 compute-0 epic_ride[80576]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:18 compute-0 epic_ride[80576]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 26fb5eac-2c31-4a21-bbae-433f98108699
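The three commands above are ceph-volume's bootstrap sequence: two throwaway keys from ceph-authtool, then osd new to register the OSD's fsid with the monitors and allocate an id. Minus the auth payload the logged invocation pipes in via -i -, the registration step looks like this by hand (a sketch; keyring path and UUID come straight from the log):

    # Register a new OSD fsid; the command prints the allocated OSD id
    OSD_FSID=26fb5eac-2c31-4a21-bbae-433f98108699   # from the line above
    ceph --cluster ceph --name client.bootstrap-osd \
         --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
         osd new "$OSD_FSID"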
Nov 25 09:32:18 compute-0 ceph-mon[74207]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "54eaf85f-4a96-4481-89e2-59a1f01c0d63"} v 0)
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/255611691' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "54eaf85f-4a96-4481-89e2-59a1f01c0d63"}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "26fb5eac-2c31-4a21-bbae-433f98108699"} v 0)
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4031988559' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "26fb5eac-2c31-4a21-bbae-433f98108699"}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/255611691' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "54eaf85f-4a96-4481-89e2-59a1f01c0d63"}]': finished
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4031988559' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "26fb5eac-2c31-4a21-bbae-433f98108699"}]': finished
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 09:32:19 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
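Both "failed to return metadata" errors are transient rather than fatal: the mgr queries osd metadata immediately after osd new, before either ceph-osd daemon has booted and reported its metadata, so ENOENT is the expected answer at this point. Once the daemons are up, the same query succeeds:

    # After the OSDs start, this returns hostname, devices, bluestore details, etc.
    ceph osd metadata 1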
Nov 25 09:32:19 compute-0 lvm[80637]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:32:19 compute-0 lvm[80637]: VG ceph_vg0 finished
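lvm's event-activation messages confirm the backing volume group is fully assembled; the PV being a loop device suggests a test or lab deployment rather than real disks. The state can be inspected with the standard LVM tools, e.g.:

    # Names taken from the log; /dev/loop3 backs ceph_vg0/ceph_lv0
    pvs /dev/loop3
    lvs -o lv_name,lv_size,lv_tags ceph_vg0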
Nov 25 09:32:19 compute-0 epic_ride[80576]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 25 09:32:19 compute-0 epic_ride[80576]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 25 09:32:19 compute-0 epic_ride[80576]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 09:32:19 compute-0 epic_ride[80576]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:19 compute-0 epic_ride[80576]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/736719887' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Nov 25 09:32:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308712030' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 09:32:19 compute-0 epic_ride[80576]:  stderr: got monmap epoch 1
Nov 25 09:32:19 compute-0 epic_ride[80576]: --> Creating keyring file for osd.1
Nov 25 09:32:19 compute-0 epic_ride[80576]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 25 09:32:19 compute-0 epic_ride[80576]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 25 09:32:19 compute-0 epic_ride[80576]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 26fb5eac-2c31-4a21-bbae-433f98108699 --setuser ceph --setgroup ceph
Nov 25 09:32:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/255611691' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "54eaf85f-4a96-4481-89e2-59a1f01c0d63"}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4031988559' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "26fb5eac-2c31-4a21-bbae-433f98108699"}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/255611691' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "54eaf85f-4a96-4481-89e2-59a1f01c0d63"}]': finished
Nov 25 09:32:19 compute-0 ceph-mon[74207]: osdmap e4: 1 total, 0 up, 1 in
Nov 25 09:32:19 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4031988559' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "26fb5eac-2c31-4a21-bbae-433f98108699"}]': finished
Nov 25 09:32:19 compute-0 ceph-mon[74207]: osdmap e5: 2 total, 0 up, 2 in
Nov 25 09:32:19 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/736719887' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 09:32:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3308712030' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 09:32:20 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 09:32:20 compute-0 ceph-mon[74207]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:20 compute-0 ceph-mon[74207]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 09:32:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:22 compute-0 epic_ride[80576]:  stderr: 2025-11-25T09:32:19.621+0000 7f7bf86d8740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Nov 25 09:32:22 compute-0 epic_ride[80576]:  stderr: 2025-11-25T09:32:19.883+0000 7f7bf86d8740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
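Both stderr lines from ceph-osd --mkfs are expected on a brand-new device: there is no bluestore label or fsid to read yet, so "No valid bdev label found" and "_read_fsid unparsable uuid" are part of normal first-time formatting, not failures; the next line confirms prepare succeeded. After mkfs the freshly written label can be checked:

    # Should now print osd_uuid 26fb5eac-2c31-4a21-bbae-433f98108699 among the label fields
    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0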
Nov 25 09:32:22 compute-0 epic_ride[80576]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 25 09:32:22 compute-0 epic_ride[80576]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 09:32:22 compute-0 epic_ride[80576]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 25 09:32:22 compute-0 epic_ride[80576]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:22 compute-0 epic_ride[80576]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:22 compute-0 epic_ride[80576]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 09:32:22 compute-0 epic_ride[80576]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 09:32:22 compute-0 epic_ride[80576]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 25 09:32:22 compute-0 epic_ride[80576]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
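The prepare and activate phases logged above are what a single ceph-volume lvm create performs end to end; an equivalent one-shot invocation, sketched with the values from this log, would be:

    ceph-volume lvm create --bluestore \
        --data ceph_vg0/ceph_lv0 \
        --osd-fsid 26fb5eac-2c31-4a21-bbae-433f98108699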
Nov 25 09:32:22 compute-0 systemd[1]: libpod-ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494.scope: Deactivated successfully.
Nov 25 09:32:22 compute-0 systemd[1]: libpod-ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494.scope: Consumed 1.402s CPU time.
Nov 25 09:32:22 compute-0 podman[80562]: 2025-11-25 09:32:22.44355854 +0000 UTC m=+4.043217394 container died ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ride, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-484cfc8ac9741ccc4eda0b2a58b5029d0a8ab5d7bfd29ddac0eb53e7f24ad4f5-merged.mount: Deactivated successfully.
Nov 25 09:32:22 compute-0 podman[80562]: 2025-11-25 09:32:22.470674864 +0000 UTC m=+4.070333718 container remove ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ride, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:32:22 compute-0 systemd[1]: libpod-conmon-ab5a6a7dffe0d7abc2d72b5a27360b930f37aab5e3f79c9dc9f2d20cdd3ca494.scope: Deactivated successfully.
Nov 25 09:32:22 compute-0 sudo[80471]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:22 compute-0 sudo[81554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:22 compute-0 sudo[81554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:22 compute-0 sudo[81554]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:22 compute-0 sudo[81579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:32:22 compute-0 sudo[81579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:22 compute-0 podman[81634]: 2025-11-25 09:32:22.849832454 +0000 UTC m=+0.025607615 container create 34111c65650aa0ac2b1513f94a170c02d12d14a65844eeaab62b4431098c09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:32:22 compute-0 ceph-mon[74207]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:22 compute-0 systemd[1]: Started libpod-conmon-34111c65650aa0ac2b1513f94a170c02d12d14a65844eeaab62b4431098c09fb.scope.
Nov 25 09:32:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:22 compute-0 podman[81634]: 2025-11-25 09:32:22.894991022 +0000 UTC m=+0.070766204 container init 34111c65650aa0ac2b1513f94a170c02d12d14a65844eeaab62b4431098c09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:22 compute-0 podman[81634]: 2025-11-25 09:32:22.899543963 +0000 UTC m=+0.075319123 container start 34111c65650aa0ac2b1513f94a170c02d12d14a65844eeaab62b4431098c09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_aryabhata, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:22 compute-0 podman[81634]: 2025-11-25 09:32:22.900675602 +0000 UTC m=+0.076450763 container attach 34111c65650aa0ac2b1513f94a170c02d12d14a65844eeaab62b4431098c09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_aryabhata, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:32:22 compute-0 frosty_aryabhata[81647]: 167 167
Nov 25 09:32:22 compute-0 systemd[1]: libpod-34111c65650aa0ac2b1513f94a170c02d12d14a65844eeaab62b4431098c09fb.scope: Deactivated successfully.
Nov 25 09:32:22 compute-0 podman[81634]: 2025-11-25 09:32:22.90303833 +0000 UTC m=+0.078813512 container died 34111c65650aa0ac2b1513f94a170c02d12d14a65844eeaab62b4431098c09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_aryabhata, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-05c547ac2b81a6c335584e6b7f10d54ed51b34c2178cb6a7132bf7476d5d1bed-merged.mount: Deactivated successfully.
Nov 25 09:32:22 compute-0 podman[81634]: 2025-11-25 09:32:22.920826535 +0000 UTC m=+0.096601696 container remove 34111c65650aa0ac2b1513f94a170c02d12d14a65844eeaab62b4431098c09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:22 compute-0 podman[81634]: 2025-11-25 09:32:22.839366713 +0000 UTC m=+0.015141894 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:22 compute-0 systemd[1]: libpod-conmon-34111c65650aa0ac2b1513f94a170c02d12d14a65844eeaab62b4431098c09fb.scope: Deactivated successfully.
Nov 25 09:32:23 compute-0 podman[81669]: 2025-11-25 09:32:23.031157577 +0000 UTC m=+0.026756659 container create aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:32:23 compute-0 systemd[1]: Started libpod-conmon-aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b.scope.
Nov 25 09:32:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07417b0bad9b3d26e24727f1346ea7aa53ae74c5ecc5a1ec0eb0493733524f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07417b0bad9b3d26e24727f1346ea7aa53ae74c5ecc5a1ec0eb0493733524f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07417b0bad9b3d26e24727f1346ea7aa53ae74c5ecc5a1ec0eb0493733524f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07417b0bad9b3d26e24727f1346ea7aa53ae74c5ecc5a1ec0eb0493733524f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:23 compute-0 podman[81669]: 2025-11-25 09:32:23.080008415 +0000 UTC m=+0.075607497 container init aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:23 compute-0 podman[81669]: 2025-11-25 09:32:23.085275539 +0000 UTC m=+0.080874612 container start aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:32:23 compute-0 podman[81669]: 2025-11-25 09:32:23.086551422 +0000 UTC m=+0.082150524 container attach aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:32:23 compute-0 podman[81669]: 2025-11-25 09:32:23.021099643 +0000 UTC m=+0.016698745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:23 compute-0 zen_knuth[81682]: {
Nov 25 09:32:23 compute-0 zen_knuth[81682]:     "1": [
Nov 25 09:32:23 compute-0 zen_knuth[81682]:         {
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "devices": [
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "/dev/loop3"
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             ],
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "lv_name": "ceph_lv0",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "lv_size": "21470642176",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "name": "ceph_lv0",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "tags": {
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.cluster_name": "ceph",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.crush_device_class": "",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.encrypted": "0",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.osd_id": "1",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.type": "block",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.vdo": "0",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:                 "ceph.with_tpm": "0"
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             },
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "type": "block",
Nov 25 09:32:23 compute-0 zen_knuth[81682]:             "vg_name": "ceph_vg0"
Nov 25 09:32:23 compute-0 zen_knuth[81682]:         }
Nov 25 09:32:23 compute-0 zen_knuth[81682]:     ]
Nov 25 09:32:23 compute-0 zen_knuth[81682]: }
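The JSON block above is the inventory cephadm uses to map OSD ids to logical volumes and their tags. The same query can be rerun and filtered by hand, e.g. (a sketch; assumes jq is available on the host):

    cephadm ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 \
        -- lvm list --format json \
      | jq -r 'to_entries[] | "osd.\(.key) \(.value[0].lv_path) \(.value[0].tags["ceph.osd_fsid"])"'
    # osd.1 /dev/ceph_vg0/ceph_lv0 26fb5eac-2c31-4a21-bbae-433f98108699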
Nov 25 09:32:23 compute-0 systemd[1]: libpod-aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b.scope: Deactivated successfully.
Nov 25 09:32:23 compute-0 conmon[81682]: conmon aba977c96f1c066487fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b.scope/container/memory.events
Nov 25 09:32:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Nov 25 09:32:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 09:32:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:23 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Nov 25 09:32:23 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Nov 25 09:32:23 compute-0 podman[81691]: 2025-11-25 09:32:23.342972368 +0000 UTC m=+0.016988982 container died aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c07417b0bad9b3d26e24727f1346ea7aa53ae74c5ecc5a1ec0eb0493733524f9-merged.mount: Deactivated successfully.
Nov 25 09:32:23 compute-0 podman[81691]: 2025-11-25 09:32:23.361758832 +0000 UTC m=+0.035775426 container remove aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_knuth, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:32:23 compute-0 systemd[1]: libpod-conmon-aba977c96f1c066487fc2fac24151eef54eacf4ca8c7c7c897e5d3284b2db87b.scope: Deactivated successfully.
Nov 25 09:32:23 compute-0 sudo[81579]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 25 09:32:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 09:32:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:23 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 25 09:32:23 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
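cephadm now hands out unit files: osd.0 to compute-1 and osd.1 to the local compute-0 (the _orch deploy invocation just below is that local push). Progress can be followed from any admin node with, for example:

    ceph orch ps        # daemon placement and state per host
    ceph osd tree       # should show both OSDs once they come up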
Nov 25 09:32:23 compute-0 sudo[81703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:23 compute-0 sudo[81703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:23 compute-0 sudo[81703]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:23 compute-0 sudo[81728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:23 compute-0 sudo[81728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:23 compute-0 podman[81787]: 2025-11-25 09:32:23.757428734 +0000 UTC m=+0.027575870 container create 1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:23 compute-0 systemd[1]: Started libpod-conmon-1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729.scope.
Nov 25 09:32:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:23 compute-0 podman[81787]: 2025-11-25 09:32:23.808139254 +0000 UTC m=+0.078286390 container init 1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:23 compute-0 podman[81787]: 2025-11-25 09:32:23.812045047 +0000 UTC m=+0.082192162 container start 1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 09:32:23 compute-0 podman[81787]: 2025-11-25 09:32:23.813307343 +0000 UTC m=+0.083454468 container attach 1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_agnesi, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:32:23 compute-0 musing_agnesi[81800]: 167 167
Nov 25 09:32:23 compute-0 systemd[1]: libpod-1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729.scope: Deactivated successfully.
Nov 25 09:32:23 compute-0 conmon[81800]: conmon 1a7843d835e9a52e139d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729.scope/container/memory.events
Nov 25 09:32:23 compute-0 podman[81787]: 2025-11-25 09:32:23.816320314 +0000 UTC m=+0.086467480 container died 1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_agnesi, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f1a1ac745d2e3f0ad90dd8248c6d57bbcdcb428d7f56c2541ca6606ecea2ddd-merged.mount: Deactivated successfully.
Nov 25 09:32:23 compute-0 podman[81787]: 2025-11-25 09:32:23.834299118 +0000 UTC m=+0.104446244 container remove 1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_agnesi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:32:23 compute-0 podman[81787]: 2025-11-25 09:32:23.745921894 +0000 UTC m=+0.016069019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:23 compute-0 systemd[1]: libpod-conmon-1a7843d835e9a52e139d79913f682d3f599074c6b0a7bf222b71ca502d78e729.scope: Deactivated successfully.
Nov 25 09:32:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 09:32:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 09:32:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:24 compute-0 podman[81827]: 2025-11-25 09:32:24.008089417 +0000 UTC m=+0.028229331 container create 400219649ae7e722c4c20f27efb3ca0704dcd1a0dbe43ace67a586737c86c053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:32:24 compute-0 systemd[1]: Started libpod-conmon-400219649ae7e722c4c20f27efb3ca0704dcd1a0dbe43ace67a586737c86c053.scope.
Nov 25 09:32:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2eb203cbfdcdaa258d9bcd0929aa967507727a4c97331333169c1b37d38097/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2eb203cbfdcdaa258d9bcd0929aa967507727a4c97331333169c1b37d38097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2eb203cbfdcdaa258d9bcd0929aa967507727a4c97331333169c1b37d38097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2eb203cbfdcdaa258d9bcd0929aa967507727a4c97331333169c1b37d38097/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2eb203cbfdcdaa258d9bcd0929aa967507727a4c97331333169c1b37d38097/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:24 compute-0 podman[81827]: 2025-11-25 09:32:24.056011667 +0000 UTC m=+0.076151591 container init 400219649ae7e722c4c20f27efb3ca0704dcd1a0dbe43ace67a586737c86c053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:32:24 compute-0 podman[81827]: 2025-11-25 09:32:24.060880933 +0000 UTC m=+0.081020848 container start 400219649ae7e722c4c20f27efb3ca0704dcd1a0dbe43ace67a586737c86c053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:32:24 compute-0 podman[81827]: 2025-11-25 09:32:24.06363085 +0000 UTC m=+0.083770764 container attach 400219649ae7e722c4c20f27efb3ca0704dcd1a0dbe43ace67a586737c86c053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate-test, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:32:24 compute-0 podman[81827]: 2025-11-25 09:32:23.996146224 +0000 UTC m=+0.016286138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate-test[81840]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 25 09:32:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate-test[81840]:                             [--no-systemd] [--no-tmpfs]
Nov 25 09:32:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate-test[81840]: ceph-volume activate: error: unrecognized arguments: --bad-option
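The "activate-test" container and its argparse error are not a real failure. cephadm appears to probe ceph-volume activate with a deliberately bogus flag and inspect the resulting usage/error text to decide which activation path this image supports, so "unrecognized arguments: --bad-option" means the probe worked. Reproduced by hand (expected to exit non-zero):

    podman run --rm --entrypoint /usr/sbin/ceph-volume \
      quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
      activate --bad-option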
Nov 25 09:32:24 compute-0 systemd[1]: libpod-400219649ae7e722c4c20f27efb3ca0704dcd1a0dbe43ace67a586737c86c053.scope: Deactivated successfully.
Nov 25 09:32:24 compute-0 podman[81845]: 2025-11-25 09:32:24.236231269 +0000 UTC m=+0.017491058 container died 400219649ae7e722c4c20f27efb3ca0704dcd1a0dbe43ace67a586737c86c053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 09:32:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d2eb203cbfdcdaa258d9bcd0929aa967507727a4c97331333169c1b37d38097-merged.mount: Deactivated successfully.
Nov 25 09:32:24 compute-0 podman[81845]: 2025-11-25 09:32:24.254603774 +0000 UTC m=+0.035863553 container remove 400219649ae7e722c4c20f27efb3ca0704dcd1a0dbe43ace67a586737c86c053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:32:24 compute-0 systemd[1]: libpod-conmon-400219649ae7e722c4c20f27efb3ca0704dcd1a0dbe43ace67a586737c86c053.scope: Deactivated successfully.
Nov 25 09:32:24 compute-0 systemd[1]: Reloading.
Nov 25 09:32:24 compute-0 systemd-sysv-generator[81898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:24 compute-0 systemd-rc-local-generator[81895]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:24 compute-0 systemd[1]: Reloading.
Nov 25 09:32:24 compute-0 systemd-rc-local-generator[81935]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:32:24 compute-0 systemd-sysv-generator[81938]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:32:24 compute-0 systemd[1]: Starting Ceph osd.1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:32:24 compute-0 ceph-mon[74207]: Deploying daemon osd.0 on compute-1
Nov 25 09:32:24 compute-0 ceph-mon[74207]: Deploying daemon osd.1 on compute-0
Nov 25 09:32:24 compute-0 ceph-mon[74207]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:24 compute-0 podman[81992]: 2025-11-25 09:32:24.974559313 +0000 UTC m=+0.025486148 container create 8011555c3d4ffaf71181b150da9829221fe4ae94e13c60cf73055ddc4ad09d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:32:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961501ba7387e7389d8780a616a1181e99116497e78342cf331241474e2a6c24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961501ba7387e7389d8780a616a1181e99116497e78342cf331241474e2a6c24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961501ba7387e7389d8780a616a1181e99116497e78342cf331241474e2a6c24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961501ba7387e7389d8780a616a1181e99116497e78342cf331241474e2a6c24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961501ba7387e7389d8780a616a1181e99116497e78342cf331241474e2a6c24/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:25 compute-0 podman[81992]: 2025-11-25 09:32:25.018522681 +0000 UTC m=+0.069449527 container init 8011555c3d4ffaf71181b150da9829221fe4ae94e13c60cf73055ddc4ad09d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:32:25 compute-0 podman[81992]: 2025-11-25 09:32:25.024732159 +0000 UTC m=+0.075658985 container start 8011555c3d4ffaf71181b150da9829221fe4ae94e13c60cf73055ddc4ad09d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:32:25 compute-0 podman[81992]: 2025-11-25 09:32:25.025815869 +0000 UTC m=+0.076742705 container attach 8011555c3d4ffaf71181b150da9829221fe4ae94e13c60cf73055ddc4ad09d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:25 compute-0 podman[81992]: 2025-11-25 09:32:24.964005025 +0000 UTC m=+0.014931880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-0 lvm[82087]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:32:25 compute-0 lvm[82087]: VG ceph_vg0 finished
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-0 bash[81992]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-0 lvm[82091]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:32:25 compute-0 lvm[82091]: VG ceph_vg0 finished
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 25 09:32:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 09:32:25 compute-0 bash[81992]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 09:32:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate[82005]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 25 09:32:25 compute-0 bash[81992]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 25 09:32:25 compute-0 systemd[1]: libpod-8011555c3d4ffaf71181b150da9829221fe4ae94e13c60cf73055ddc4ad09d08.scope: Deactivated successfully.
Nov 25 09:32:26 compute-0 podman[81992]: 2025-11-25 09:32:26.000277229 +0000 UTC m=+1.051204084 container died 8011555c3d4ffaf71181b150da9829221fe4ae94e13c60cf73055ddc4ad09d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-961501ba7387e7389d8780a616a1181e99116497e78342cf331241474e2a6c24-merged.mount: Deactivated successfully.
Nov 25 09:32:26 compute-0 podman[81992]: 2025-11-25 09:32:26.02366204 +0000 UTC m=+1.074588875 container remove 8011555c3d4ffaf71181b150da9829221fe4ae94e13c60cf73055ddc4ad09d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:32:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:26 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:26 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:26 compute-0 podman[82245]: 2025-11-25 09:32:26.158033465 +0000 UTC m=+0.027149398 container create c383e3b23555c0a86d76b58a68f4005a8c1b545d78541120162f60efb21cb081 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30251b25a5d01b28dc975db7c5df9612e32c964a0d348b176f87783f927ce120/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30251b25a5d01b28dc975db7c5df9612e32c964a0d348b176f87783f927ce120/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30251b25a5d01b28dc975db7c5df9612e32c964a0d348b176f87783f927ce120/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30251b25a5d01b28dc975db7c5df9612e32c964a0d348b176f87783f927ce120/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30251b25a5d01b28dc975db7c5df9612e32c964a0d348b176f87783f927ce120/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-0 podman[82245]: 2025-11-25 09:32:26.202378422 +0000 UTC m=+0.071494365 container init c383e3b23555c0a86d76b58a68f4005a8c1b545d78541120162f60efb21cb081 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:32:26 compute-0 podman[82245]: 2025-11-25 09:32:26.207517786 +0000 UTC m=+0.076633709 container start c383e3b23555c0a86d76b58a68f4005a8c1b545d78541120162f60efb21cb081 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:26 compute-0 bash[82245]: c383e3b23555c0a86d76b58a68f4005a8c1b545d78541120162f60efb21cb081
Nov 25 09:32:26 compute-0 podman[82245]: 2025-11-25 09:32:26.147049619 +0000 UTC m=+0.016165562 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:26 compute-0 systemd[1]: Started Ceph osd.1 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:32:26 compute-0 ceph-osd[82261]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:32:26 compute-0 ceph-osd[82261]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 25 09:32:26 compute-0 ceph-osd[82261]: pidfile_write: ignore empty --pid-file
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:26 compute-0 sudo[81728]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:32:26 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:32:26 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:26 compute-0 sudo[82273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:26 compute-0 sudo[82273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:26 compute-0 sudo[82273]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:26 compute-0 sudo[82298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:32:26 compute-0 sudo[82298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26ab400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26ab400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26ab400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26ab400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26ab400 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb188b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:26 compute-0 podman[82368]: 2025-11-25 09:32:26.620115193 +0000 UTC m=+0.026617696 container create a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_northcutt, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:26 compute-0 systemd[1]: Started libpod-conmon-a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166.scope.
Nov 25 09:32:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:26 compute-0 podman[82368]: 2025-11-25 09:32:26.668580696 +0000 UTC m=+0.075083219 container init a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_northcutt, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 09:32:26 compute-0 podman[82368]: 2025-11-25 09:32:26.67298092 +0000 UTC m=+0.079483433 container start a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:26 compute-0 podman[82368]: 2025-11-25 09:32:26.674491443 +0000 UTC m=+0.080993946 container attach a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:32:26 compute-0 dreamy_northcutt[82383]: 167 167
Nov 25 09:32:26 compute-0 systemd[1]: libpod-a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166.scope: Deactivated successfully.
Nov 25 09:32:26 compute-0 conmon[82383]: conmon a9395b6433e74454d6a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166.scope/container/memory.events
Nov 25 09:32:26 compute-0 podman[82368]: 2025-11-25 09:32:26.677262169 +0000 UTC m=+0.083764672 container died a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:32:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-323f0f568ce0fa028817bf1c94169551c173bf1cd2a3f2370a4c3e6a99adbaad-merged.mount: Deactivated successfully.
Nov 25 09:32:26 compute-0 podman[82368]: 2025-11-25 09:32:26.694851139 +0000 UTC m=+0.101353641 container remove a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:32:26 compute-0 podman[82368]: 2025-11-25 09:32:26.609527393 +0000 UTC m=+0.016029917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:26 compute-0 systemd[1]: libpod-conmon-a9395b6433e74454d6a6ab29f043c060bdd1895c0a9d4b94f334bc0a38ea2166.scope: Deactivated successfully.
Nov 25 09:32:26 compute-0 podman[82404]: 2025-11-25 09:32:26.803409563 +0000 UTC m=+0.027051073 container create 79188e11b43ac44a471e846b018210bf64c46fcd0ad51498fa3ee4d6600d993b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:32:26 compute-0 ceph-osd[82261]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 25 09:32:26 compute-0 systemd[1]: Started libpod-conmon-79188e11b43ac44a471e846b018210bf64c46fcd0ad51498fa3ee4d6600d993b.scope.
Nov 25 09:32:26 compute-0 ceph-osd[82261]: load: jerasure load: lrc 
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 09:32:26 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a168313aa7bcbecfc836cc2b452821bf50352eb5bc6a998109a67877bb7982/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a168313aa7bcbecfc836cc2b452821bf50352eb5bc6a998109a67877bb7982/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a168313aa7bcbecfc836cc2b452821bf50352eb5bc6a998109a67877bb7982/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a168313aa7bcbecfc836cc2b452821bf50352eb5bc6a998109a67877bb7982/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:26 compute-0 podman[82404]: 2025-11-25 09:32:26.860740956 +0000 UTC m=+0.084382466 container init 79188e11b43ac44a471e846b018210bf64c46fcd0ad51498fa3ee4d6600d993b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_germain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:32:26 compute-0 ceph-mon[74207]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:26 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:26 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:26 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:26 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:26 compute-0 podman[82404]: 2025-11-25 09:32:26.868914742 +0000 UTC m=+0.092556252 container start 79188e11b43ac44a471e846b018210bf64c46fcd0ad51498fa3ee4d6600d993b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:32:26 compute-0 podman[82404]: 2025-11-25 09:32:26.869799969 +0000 UTC m=+0.093441479 container attach 79188e11b43ac44a471e846b018210bf64c46fcd0ad51498fa3ee4d6600d993b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:26 compute-0 podman[82404]: 2025-11-25 09:32:26.791451232 +0000 UTC m=+0.015092762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:27 compute-0 lvm[82505]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:32:27 compute-0 lvm[82505]: VG ceph_vg0 finished
Nov 25 09:32:27 compute-0 blissful_germain[82422]: {}
Nov 25 09:32:27 compute-0 ceph-osd[82261]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 09:32:27 compute-0 ceph-osd[82261]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:27 compute-0 systemd[1]: libpod-79188e11b43ac44a471e846b018210bf64c46fcd0ad51498fa3ee4d6600d993b.scope: Deactivated successfully.
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:27 compute-0 podman[82516]: 2025-11-25 09:32:27.417080468 +0000 UTC m=+0.021634125 container died 79188e11b43ac44a471e846b018210bf64c46fcd0ad51498fa3ee4d6600d993b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_germain, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-03a168313aa7bcbecfc836cc2b452821bf50352eb5bc6a998109a67877bb7982-merged.mount: Deactivated successfully.
Nov 25 09:32:27 compute-0 podman[82516]: 2025-11-25 09:32:27.437536986 +0000 UTC m=+0.042090643 container remove 79188e11b43ac44a471e846b018210bf64c46fcd0ad51498fa3ee4d6600d993b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 09:32:27 compute-0 systemd[1]: libpod-conmon-79188e11b43ac44a471e846b018210bf64c46fcd0ad51498fa3ee4d6600d993b.scope: Deactivated successfully.
Nov 25 09:32:27 compute-0 sudo[82298]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:32:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:32:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:27 compute-0 sudo[82531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:32:27 compute-0 sudo[82531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:27 compute-0 sudo[82531]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:27 compute-0 sudo[82556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:27 compute-0 sudo[82556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:27 compute-0 sudo[82556]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:27 compute-0 sudo[82583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:32:27 compute-0 sudo[82583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb26abc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb2894000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb2894000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb2894000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb2894000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs mount
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs mount shared_bdev_used = 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: RocksDB version: 7.9.2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Git sha 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: DB SUMMARY
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: DB Session ID:  PYYYONWJRBU13VM6FPZC
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: CURRENT file:  CURRENT
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                         Options.error_if_exists: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.create_if_missing: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                                     Options.env: 0x564fb26f7dc0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                                Options.info_log: 0x564fb26fb7e0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                              Options.statistics: (nil)
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.use_fsync: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                              Options.db_log_dir: 
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.write_buffer_manager: 0x564fb27f4a00
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.unordered_write: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.row_cache: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                              Options.wal_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.two_write_queues: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.wal_compression: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.atomic_flush: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.max_background_jobs: 4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.max_background_compactions: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.max_subcompactions: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.max_open_files: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Compression algorithms supported:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         kZSTD supported: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         kXpressCompression supported: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         kBZip2Compression supported: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         kLZ4Compression supported: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         kZlibCompression supported: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         kSnappyCompression supported: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
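[editor's note] The options dump above repeats once per column family because BlueStore appears to run its RocksDB instance with sharded column families: the p-* and O-* families that follow are, by BlueStore's sharding convention, pg-metadata and object/onode shards respectively, and each shard logs the same inherited options. As a minimal sketch (not Ceph's actual configuration code), a few of the base options logged above map directly onto real rocksdb::Options fields; the values below are simply those from the dump:

    #include <rocksdb/options.h>

    // Sketch only: mirrors a handful of the logged base options.
    rocksdb::Options MakeOptionsFromDump() {
      rocksdb::Options opts;
      opts.compaction_style = rocksdb::kCompactionStyleLevel;   // Options.compaction_style
      opts.compaction_pri   = rocksdb::kMinOverlappingRatio;    // Options.compaction_pri
      opts.max_compaction_bytes = 1677721600;                   // 25 * target_file_size_base
      opts.soft_pending_compaction_bytes_limit = 64ull << 30;   // 68719476736 (64 GiB)
      opts.hard_pending_compaction_bytes_limit = 256ull << 30;  // 274877906944 (256 GiB)
      opts.ttl = 2592000;                                       // 30 days, in seconds
      return opts;
    }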
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
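[editor's note] The indented table_factory block logged for each family corresponds to rocksdb::BlockBasedTableOptions. A hedged sketch wiring up the logged values follows; note two assumptions: the log names the filter_policy only as "bloomfilter" without a bits-per-key, so the 10 below is a guess, and the logged BinnedLRUCache is Ceph's own cache implementation, for which stock RocksDB's LRU cache is substituted here:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Sketch only: mirrors the logged table_factory options for one p-* family.
    rocksdb::Options WithBlockBasedTable(rocksdb::Options opts) {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                      // block_size: 4096
      t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      t.format_version = 5;                     // format_version: 5
      t.whole_key_filtering = true;             // whole_key_filtering: 1
      // filter_policy: "bloomfilter"; bits-per-key not in the log, 10 assumed.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // block_cache: the dump shows one shared 483183820-byte cache for p-*.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return opts;
    }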
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
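[editor's note] The write-buffer settings repeated in each dump imply the flush sizing for a family. A small arithmetic sketch, with values taken straight from the log:

    #include <cstdint>
    #include <cstdio>

    // Sketch: flush sizing implied by the logged write-buffer options.
    int main() {
      const uint64_t write_buffer_size = 16777216;  // 16 MiB per memtable
      const int min_merge = 6;                      // min_write_buffer_number_to_merge
      const int max_buffers = 64;                   // max_write_buffer_number
      // Memtables accumulate until at least `min_merge` are immutable, so a
      // typical flush writes about 6 * 16 MiB = 96 MiB of L0 data per family.
      std::printf("typical flush: %llu MiB\n",
                  (unsigned long long)(write_buffer_size * min_merge >> 20));
      // Upper bound on memtable memory for one family: 64 * 16 MiB = 1 GiB.
      std::printf("memtable cap:  %llu MiB\n",
                  (unsigned long long)(write_buffer_size * max_buffers >> 20));
    }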
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
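[editor's note] With max_bytes_for_level_base = 1 GiB, a multiplier of 8, num_levels = 7, and level_compaction_dynamic_level_bytes off, the per-level targets grow geometrically from L1 upward; L0 is governed by file count instead. A sketch of that arithmetic, values from the dump:

    #include <cstdint>
    #include <cstdio>

    // Sketch: level target sizes implied by the logged leveled-compaction
    // options (dynamic leveling is off, so targets grow from L1 upward).
    int main() {
      const uint64_t base = 1073741824;  // max_bytes_for_level_base (L1 target)
      const double mult = 8.0;           // max_bytes_for_level_multiplier
      uint64_t target = base;
      for (int level = 1; level <= 6; ++level) {   // num_levels: 7 (L0..L6)
        std::printf("L%d target: %llu GiB\n", level,
                    (unsigned long long)(target >> 30));
        target = (uint64_t)(target * mult);        // 1, 8, 64, 512, ... GiB
      }
      // L0 uses file-count triggers: compaction starts at 8 files, writes
      // are slowed at 20 and stopped at 36 (the logged trigger values).
    }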
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb19209b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
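[editor's note] The block_cache pointers show two distinct shared caches: the p-* families all reference 0x564fb1921350 (capacity 483183820 bytes), while the O-* families reference a separate cache at 0x564fb19209b0 (capacity 536870912 bytes). BinnedLRUCache is Ceph's own cache implementation plugged into RocksDB; as a hedged sketch of the equivalent sharding arithmetic, stock RocksDB's sharded LRU cache is used below:

    #include <memory>
    #include <rocksdb/cache.h>

    // Sketch: one 512 MiB cache shared by the O-* families; num_shard_bits = 4
    // means 2^4 = 16 shards of 32 MiB each. Stock NewLRUCache stands in for
    // Ceph's BinnedLRUCache here.
    std::shared_ptr<rocksdb::Cache> MakeOnodeShardCache() {
      return rocksdb::NewLRUCache(/*capacity=*/536870912,
                                  /*num_shard_bits=*/4,
                                  /*strict_capacity_limit=*/false,
                                  /*high_pri_pool_ratio=*/0.0);
    }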
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb19209b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb19209b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8bd1589a-a747-4946-b8c0-d5f56b1c53cc
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063147956934, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063147957149, "job": 1, "event": "recovery_finished"}
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 25 09:32:27 compute-0 ceph-osd[82261]: freelist init
Nov 25 09:32:27 compute-0 ceph-osd[82261]: freelist _read_cfg
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 09:32:27 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bluefs umount
Nov 25 09:32:27 compute-0 ceph-osd[82261]: bdev(0x564fb2894000 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 09:32:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:32:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 podman[82851]: 2025-11-25 09:32:28.062043669 +0000 UTC m=+0.036663678 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:32:28 compute-0 podman[82851]: 2025-11-25 09:32:28.139108619 +0000 UTC m=+0.113728607 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bdev(0x564fb2894000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bdev(0x564fb2894000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bdev(0x564fb2894000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bdev(0x564fb2894000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluefs mount
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluefs mount shared_bdev_used = 4718592
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: RocksDB version: 7.9.2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Git sha 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: DB SUMMARY
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: DB Session ID:  PYYYONWJRBU13VM6FPZD
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: CURRENT file:  CURRENT
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                         Options.error_if_exists: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.create_if_missing: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                                     Options.env: 0x564fb28a2310
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                                Options.info_log: 0x564fb26fb980
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                              Options.statistics: (nil)
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.use_fsync: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                              Options.db_log_dir: 
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.write_buffer_manager: 0x564fb27f4a00
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.unordered_write: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.row_cache: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                              Options.wal_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.two_write_queues: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.wal_compression: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.atomic_flush: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.max_background_jobs: 4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.max_background_compactions: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.max_subcompactions: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.max_open_files: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Compression algorithms supported:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         kZSTD supported: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         kXpressCompression supported: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         kBZip2Compression supported: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         kLZ4Compression supported: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         kZlibCompression supported: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         kSnappyCompression supported: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fb6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fb6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fb6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fb6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fb6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fb6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fb6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb1921350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbb00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb19209b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbb00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb19209b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:           Options.merge_operator: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564fb26fbb00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564fb19209b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.compression: LZ4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.num_levels: 7
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 09:32:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.bloom_locality: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                               Options.ttl: 2592000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                       Options.enable_blob_files: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                           Options.min_blob_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 09:32:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8bd1589a-a747-4946-b8c0-d5f56b1c53cc
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148242398, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148244518, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063148, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8bd1589a-a747-4946-b8c0-d5f56b1c53cc", "db_session_id": "PYYYONWJRBU13VM6FPZD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148248642, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063148, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8bd1589a-a747-4946-b8c0-d5f56b1c53cc", "db_session_id": "PYYYONWJRBU13VM6FPZD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148250064, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063148, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8bd1589a-a747-4946-b8c0-d5f56b1c53cc", "db_session_id": "PYYYONWJRBU13VM6FPZD", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063148253399, "job": 1, "event": "recovery_finished"}
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564fb28e2000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: DB pointer 0x564fb28b0000
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 25 09:32:28 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:32:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:32:28 compute-0 ceph-osd[82261]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 09:32:28 compute-0 ceph-osd[82261]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 09:32:28 compute-0 ceph-osd[82261]: _get_class not permitted to load lua
Nov 25 09:32:28 compute-0 ceph-osd[82261]: _get_class not permitted to load sdk
Nov 25 09:32:28 compute-0 ceph-osd[82261]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 09:32:28 compute-0 ceph-osd[82261]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 09:32:28 compute-0 ceph-osd[82261]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 09:32:28 compute-0 ceph-osd[82261]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 09:32:28 compute-0 ceph-osd[82261]: osd.1 0 load_pgs
Nov 25 09:32:28 compute-0 ceph-osd[82261]: osd.1 0 load_pgs opened 0 pgs
Nov 25 09:32:28 compute-0 ceph-osd[82261]: osd.1 0 log_to_monitors true
Nov 25 09:32:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1[82257]: 2025-11-25T09:32:28.267+0000 7f2535e7d740 -1 osd.1 0 log_to_monitors true
Nov 25 09:32:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-mon[74207]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Nov 25 09:32:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 09:32:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:32:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 sudo[82583]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:32:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:32:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:28 compute-0 sudo[83139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:32:28 compute-0 sudo[83139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:28 compute-0 sudo[83139]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:28 compute-0 sudo[83164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- inventory --format=json-pretty --filter-for-batch
Nov 25 09:32:28 compute-0 sudo[83164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:32:28 compute-0 podman[83220]: 2025-11-25 09:32:28.713708465 +0000 UTC m=+0.027061011 container create 39833ea698cb79278c1e89ae0e98c857311f93c965340128d1e49ecc411d04a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_swirles, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:32:28 compute-0 systemd[1]: Started libpod-conmon-39833ea698cb79278c1e89ae0e98c857311f93c965340128d1e49ecc411d04a9.scope.
Nov 25 09:32:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:28 compute-0 podman[83220]: 2025-11-25 09:32:28.772669635 +0000 UTC m=+0.086022191 container init 39833ea698cb79278c1e89ae0e98c857311f93c965340128d1e49ecc411d04a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:32:28 compute-0 podman[83220]: 2025-11-25 09:32:28.776827934 +0000 UTC m=+0.090180480 container start 39833ea698cb79278c1e89ae0e98c857311f93c965340128d1e49ecc411d04a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:28 compute-0 podman[83220]: 2025-11-25 09:32:28.77802106 +0000 UTC m=+0.091373616 container attach 39833ea698cb79278c1e89ae0e98c857311f93c965340128d1e49ecc411d04a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_swirles, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:32:28 compute-0 eloquent_swirles[83233]: 167 167
Nov 25 09:32:28 compute-0 systemd[1]: libpod-39833ea698cb79278c1e89ae0e98c857311f93c965340128d1e49ecc411d04a9.scope: Deactivated successfully.
Nov 25 09:32:28 compute-0 podman[83220]: 2025-11-25 09:32:28.702415056 +0000 UTC m=+0.015767622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:28 compute-0 podman[83238]: 2025-11-25 09:32:28.806364634 +0000 UTC m=+0.017923310 container died 39833ea698cb79278c1e89ae0e98c857311f93c965340128d1e49ecc411d04a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c52702012d2415b3c04d3b695b80eee909123412f29affec693386a77758fab3-merged.mount: Deactivated successfully.
Nov 25 09:32:28 compute-0 podman[83238]: 2025-11-25 09:32:28.82127955 +0000 UTC m=+0.032838216 container remove 39833ea698cb79278c1e89ae0e98c857311f93c965340128d1e49ecc411d04a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_swirles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:32:28 compute-0 systemd[1]: libpod-conmon-39833ea698cb79278c1e89ae0e98c857311f93c965340128d1e49ecc411d04a9.scope: Deactivated successfully.
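
The sudo and podman lines above are cephadm's device-inventory probe: the mgr logs into the host as ceph-admin, runs the copied cephadm binary under sudo, and that in turn spawns a throwaway ceph container (eloquent_swirles) which executes ceph-volume inventory and is removed the moment it exits. A sketch of driving the same probe by hand (assuming cephadm is on PATH, the image is reachable, and nothing but the JSON report lands on stdout):

    import json
    import subprocess

    # Run ceph-volume's inventory inside a disposable container, as the mgr
    # does above, and parse the JSON report it prints.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    devices = json.loads(out)
    print(len(devices), "device(s) probed")
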
Nov 25 09:32:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Nov 25 09:32:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 09:32:28 compute-0 podman[83257]: 2025-11-25 09:32:28.932031193 +0000 UTC m=+0.028002604 container create 8c8835496994c10e54e9918054838dd358ccf50e1f5292cd568708ddf6189a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:32:28 compute-0 systemd[1]: Started libpod-conmon-8c8835496994c10e54e9918054838dd358ccf50e1f5292cd568708ddf6189a05.scope.
Nov 25 09:32:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacab331a188d1a92964cb76b1f3c3a078e490bbd79267c61c924e8f10f04feb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacab331a188d1a92964cb76b1f3c3a078e490bbd79267c61c924e8f10f04feb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacab331a188d1a92964cb76b1f3c3a078e490bbd79267c61c924e8f10f04feb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacab331a188d1a92964cb76b1f3c3a078e490bbd79267c61c924e8f10f04feb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:28 compute-0 podman[83257]: 2025-11-25 09:32:28.987470244 +0000 UTC m=+0.083441665 container init 8c8835496994c10e54e9918054838dd358ccf50e1f5292cd568708ddf6189a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sinoussi, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:32:28 compute-0 podman[83257]: 2025-11-25 09:32:28.99276476 +0000 UTC m=+0.088736161 container start 8c8835496994c10e54e9918054838dd358ccf50e1f5292cd568708ddf6189a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 25 09:32:28 compute-0 podman[83257]: 2025-11-25 09:32:28.993932959 +0000 UTC m=+0.089904360 container attach 8c8835496994c10e54e9918054838dd358ccf50e1f5292cd568708ddf6189a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sinoussi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:32:29 compute-0 podman[83257]: 2025-11-25 09:32:28.920661171 +0000 UTC m=+0.016632582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:32:29 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 09:32:29 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:32:29 compute-0 ceph-mon[74207]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 09:32:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
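
The create-or-move weight of 0.0195 follows the CRUSH convention of expressing device capacity in TiB: each of these test OSDs sits on a roughly 20 GiB device (inferred from the logs, which later report 40 GiB total across the two OSDs), and 20/1024 rounds to 0.0195. As a quick check:

    # CRUSH weight = device size in TiB, printed to four decimal places.
    size_bytes = 20 * 2**30              # ~20 GiB per OSD (inferred, not logged directly)
    print(round(size_bytes / 2**40, 4))  # 0.0195
    print(2 * size_bytes / 2**30)        # 40.0 GiB total, matching the later pgmap
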
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:29 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 09:32:29 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:29 compute-0 ceph-mgr[74476]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Nov 25 09:32:29 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]: [
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:     {
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         "available": false,
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         "being_replaced": false,
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         "ceph_device_lvm": false,
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         "lsm_data": {},
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         "lvs": [],
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         "path": "/dev/sr0",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         "rejected_reasons": [
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "Has a FileSystem",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "Insufficient space (<5GB)"
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         ],
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         "sys_api": {
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "actuators": null,
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "device_nodes": [
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:                 "sr0"
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             ],
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "devname": "sr0",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "human_readable_size": "474.00 KB",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "id_bus": "ata",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "model": "QEMU DVD-ROM",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "nr_requests": "64",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "parent": "/dev/sr0",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "partitions": {},
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "path": "/dev/sr0",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "removable": "1",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "rev": "2.5+",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "ro": "0",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "rotational": "1",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "sas_address": "",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "sas_device_handle": "",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "scheduler_mode": "mq-deadline",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "sectors": 0,
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "sectorsize": "2048",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "size": 485376.0,
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "support_discard": "2048",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "type": "disk",
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:             "vendor": "QEMU"
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:         }
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]:     }
Nov 25 09:32:29 compute-0 dreamy_sinoussi[83270]: ]
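
This is the full ceph-volume inventory report for the one block device probed on this host: the QEMU virtual DVD drive. cephadm only creates OSDs on entries whose "available" flag is true, and /dev/sr0 fails on two counts, carrying a filesystem and being far under the 5 GB minimum (it is about 474 KB). A minimal sketch of that selection rule, with the field names taken straight from the JSON above:

    import json

    report = json.loads("""[
      {"path": "/dev/sr0", "available": false,
       "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]}
    ]""")

    usable = [d["path"] for d in report if d["available"]]
    rejected = {d["path"]: d["rejected_reasons"] for d in report if not d["available"]}
    print(usable)     # []
    print(rejected)   # {'/dev/sr0': ['Has a FileSystem', 'Insufficient space (<5GB)']}
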
Nov 25 09:32:29 compute-0 systemd[1]: libpod-8c8835496994c10e54e9918054838dd358ccf50e1f5292cd568708ddf6189a05.scope: Deactivated successfully.
Nov 25 09:32:29 compute-0 podman[83257]: 2025-11-25 09:32:29.433249644 +0000 UTC m=+0.529221045 container died 8c8835496994c10e54e9918054838dd358ccf50e1f5292cd568708ddf6189a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sinoussi, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-dacab331a188d1a92964cb76b1f3c3a078e490bbd79267c61c924e8f10f04feb-merged.mount: Deactivated successfully.
Nov 25 09:32:29 compute-0 podman[83257]: 2025-11-25 09:32:29.456514779 +0000 UTC m=+0.552486180 container remove 8c8835496994c10e54e9918054838dd358ccf50e1f5292cd568708ddf6189a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 25 09:32:29 compute-0 sudo[84253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxntmztoorwqqxmbwpsylxnslekjxwyu ; /usr/bin/python3'
Nov 25 09:32:29 compute-0 sudo[84253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:32:29 compute-0 systemd[1]: libpod-conmon-8c8835496994c10e54e9918054838dd358ccf50e1f5292cd568708ddf6189a05.scope: Deactivated successfully.
Nov 25 09:32:29 compute-0 sudo[83164]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:29 compute-0 ceph-mgr[74476]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.7M
Nov 25 09:32:29 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.7M
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 25 09:32:29 compute-0 ceph-mgr[74476]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134963200: error parsing value: Value '134963200' is below minimum 939524096
Nov 25 09:32:29 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134963200: error parsing value: Value '134963200' is below minimum 939524096
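
These two WRN lines are cephadm's osd_memory_target autotuning hitting its floor: the per-OSD share on this small node works out to 134963200 bytes (128.7 MiB), but osd_memory_target has a hard minimum of 939524096 bytes (896 MiB), so the config set is refused and the OSD keeps its previous target. A sketch of the arithmetic (the 0.7 ratio is cephadm's documented autotune default, assumed here; cephadm's exact accounting of "other daemons" is more involved):

    OSD_MEMORY_TARGET_MIN = 939524096       # 896 MiB, from the error above

    def autotune_target(host_mem, other_daemons_mem, num_osds, ratio=0.7):
        """Essential shape of the per-OSD target cephadm computes."""
        return int((host_mem * ratio - other_daemons_mem) / num_osds)

    target = 134963200                      # the value derived for compute-0 above
    print(f"{target / 2**20:.1f}M")         # 128.7M, as logged
    print(target >= OSD_MEMORY_TARGET_MIN)  # False -> "below minimum 939524096"
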
Nov 25 09:32:29 compute-0 python3[84262]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:32:29 compute-0 podman[84264]: 2025-11-25 09:32:29.617959999 +0000 UTC m=+0.024889804 container create 37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693 (image=quay.io/ceph/ceph:v19, name=quirky_tharp, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:32:29 compute-0 systemd[1]: Started libpod-conmon-37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693.scope.
Nov 25 09:32:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065f93bf585527eef24f39451dbbd62f5a1258cb0c42beb19951b9cb1e9339a3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065f93bf585527eef24f39451dbbd62f5a1258cb0c42beb19951b9cb1e9339a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065f93bf585527eef24f39451dbbd62f5a1258cb0c42beb19951b9cb1e9339a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:32:29 compute-0 podman[84264]: 2025-11-25 09:32:29.669044342 +0000 UTC m=+0.075974167 container init 37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693 (image=quay.io/ceph/ceph:v19, name=quirky_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:32:29 compute-0 podman[84264]: 2025-11-25 09:32:29.672920098 +0000 UTC m=+0.079849903 container start 37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693 (image=quay.io/ceph/ceph:v19, name=quirky_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:32:29 compute-0 podman[84264]: 2025-11-25 09:32:29.674123844 +0000 UTC m=+0.081053669 container attach 37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693 (image=quay.io/ceph/ceph:v19, name=quirky_tharp, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 09:32:29 compute-0 podman[84264]: 2025-11-25 09:32:29.608417256 +0000 UTC m=+0.015347081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:32:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 25 09:32:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/611149476' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:32:29 compute-0 quirky_tharp[84277]: 
Nov 25 09:32:29 compute-0 quirky_tharp[84277]: {"fsid":"af1c9ae3-08d7-5547-a53d-2cccf7c6ef90","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":72,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":6,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1764063139,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-11-25T09:31:16.071954+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-25T09:31:16.073226+0000","services":{}},"progress_events":{}}
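
The status JSON shows what the deployment is still waiting on: two OSDs exist in the map but none are up, and the spec-apply warning is active. The Ansible task at 09:32:29 pipes exactly this document through jq .osdmap.num_up_osds; the same extraction in Python, trimmed to the fields that task cares about:

    import json

    status_json = ('{"health": {"status": "HEALTH_WARN"},'
                   ' "osdmap": {"num_osds": 2, "num_up_osds": 0}}')
    status = json.loads(status_json)
    print(status["osdmap"]["num_up_osds"])  # 0 here; the playbook retries until 2
    print(status["health"]["status"])       # HEALTH_WARN while the spec-apply check is active
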
Nov 25 09:32:30 compute-0 systemd[1]: libpod-37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693.scope: Deactivated successfully.
Nov 25 09:32:30 compute-0 conmon[84277]: conmon 37d24d13feec58a6b05c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693.scope/container/memory.events
Nov 25 09:32:30 compute-0 podman[84302]: 2025-11-25 09:32:30.031141967 +0000 UTC m=+0.016380205 container died 37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693 (image=quay.io/ceph/ceph:v19, name=quirky_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-065f93bf585527eef24f39451dbbd62f5a1258cb0c42beb19951b9cb1e9339a3-merged.mount: Deactivated successfully.
Nov 25 09:32:30 compute-0 podman[84302]: 2025-11-25 09:32:30.049203928 +0000 UTC m=+0.034442135 container remove 37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693 (image=quay.io/ceph/ceph:v19, name=quirky_tharp, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:32:30 compute-0 systemd[1]: libpod-conmon-37d24d13feec58a6b05c6a8b57c5d00413f02d27843bbe8676d7f8f0fddc0693.scope: Deactivated successfully.
Nov 25 09:32:30 compute-0 sudo[84253]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 25 09:32:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:32:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 09:32:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 25 09:32:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Nov 25 09:32:30 compute-0 ceph-osd[82261]: osd.1 0 done with init, starting boot process
Nov 25 09:32:30 compute-0 ceph-osd[82261]: osd.1 0 start_boot
Nov 25 09:32:30 compute-0 ceph-osd[82261]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 09:32:30 compute-0 ceph-osd[82261]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 09:32:30 compute-0 ceph-osd[82261]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 09:32:30 compute-0 ceph-osd[82261]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 09:32:30 compute-0 ceph-osd[82261]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 25 09:32:30 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Nov 25 09:32:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:32:30 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 09:32:30 compute-0 ceph-mon[74207]: osdmap e6: 2 total, 0 up, 2 in
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mon[74207]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mon[74207]: Adjusting osd_memory_target on compute-0 to 128.7M
Nov 25 09:32:30 compute-0 ceph-mon[74207]: Unable to set osd_memory_target on compute-0 to 134963200: error parsing value: Value '134963200' is below minimum 939524096
Nov 25 09:32:30 compute-0 ceph-mon[74207]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:30 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/611149476' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1629670021; not ready for session (expect reconnect)
Nov 25 09:32:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 25 09:32:30 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:32:30 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:30 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 09:32:30 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3040262040; not ready for session (expect reconnect)
Nov 25 09:32:30 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 09:32:31 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1629670021; not ready for session (expect reconnect)
Nov 25 09:32:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 25 09:32:31 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:31 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 09:32:31 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3040262040; not ready for session (expect reconnect)
Nov 25 09:32:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:32:31 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:31 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 09:32:31 compute-0 ceph-mon[74207]: purged_snaps scrub starts
Nov 25 09:32:31 compute-0 ceph-mon[74207]: purged_snaps scrub ok
Nov 25 09:32:31 compute-0 ceph-mon[74207]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 09:32:31 compute-0 ceph-mon[74207]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 25 09:32:31 compute-0 ceph-mon[74207]: osdmap e7: 2 total, 0 up, 2 in
Nov 25 09:32:31 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:31 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:31 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:31 compute-0 ceph-osd[82261]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 83.473 iops: 21369.172 elapsed_sec: 0.140
Nov 25 09:32:31 compute-0 ceph-osd[82261]: log_channel(cluster) log [WRN] : OSD bench result of 21369.172304 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
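
The bench figures are internally consistent: 12288000 bytes in 4 KiB writes is 3000 I/Os, 3000 I/Os at the reported rate gives the logged 0.140 s elapsed, and the bandwidth is just IOPS times block size. The mClock scheduler then rejects 21369 IOPS as implausible for an hdd-class device (outside its 50-500 IOPS sanity window) and keeps the default 315 IOPS capacity. A quick check, plus the override the message itself recommends (measure with a real tool such as fio first; the value below is only what this loop device produced):

    count, bsize = 12288000, 4096     # from "bench count 12288000 bsize 4 KiB"
    ios = count // bsize              # 3000 writes
    iops = 21369.172304               # reported result
    print(ios / iops)                 # ~0.1404 s, logged as elapsed_sec: 0.140
    print(iops * bsize / 2**20)       # ~83.47 MiB/s, logged as 83.473

    # After benchmarking the device for real, pin the capacity per OSD:
    #   ceph config set osd.1 osd_mclock_max_capacity_iops_hdd <measured_iops>
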
Nov 25 09:32:31 compute-0 ceph-osd[82261]: osd.1 0 waiting for initial osdmap
Nov 25 09:32:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1[82257]: 2025-11-25T09:32:31.903+0000 7f2531e00640 -1 osd.1 0 waiting for initial osdmap
Nov 25 09:32:31 compute-0 ceph-osd[82261]: osd.1 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 25 09:32:31 compute-0 ceph-osd[82261]: osd.1 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 25 09:32:31 compute-0 ceph-osd[82261]: osd.1 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 25 09:32:31 compute-0 ceph-osd[82261]: osd.1 7 check_osdmap_features require_osd_release unknown -> squid
Nov 25 09:32:31 compute-0 ceph-osd[82261]: osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 09:32:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-1[82257]: 2025-11-25T09:32:31.917+0000 7f252d428640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 09:32:31 compute-0 ceph-osd[82261]: osd.1 7 set_numa_affinity not setting numa affinity
Nov 25 09:32:31 compute-0 ceph-osd[82261]: osd.1 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1629670021; not ready for session (expect reconnect)
Nov 25 09:32:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 25 09:32:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3040262040; not ready for session (expect reconnect)
Nov 25 09:32:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:32:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 09:32:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 25 09:32:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:32:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e8 e8: 2 total, 2 up, 2 in
Nov 25 09:32:32 compute-0 ceph-mon[74207]: purged_snaps scrub starts
Nov 25 09:32:32 compute-0 ceph-mon[74207]: purged_snaps scrub ok
Nov 25 09:32:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[74207]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 09:32:32 compute-0 ceph-mon[74207]: OSD bench result of 21369.172304 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 09:32:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021] boot
Nov 25 09:32:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040] boot
Nov 25 09:32:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 2 up, 2 in
Nov 25 09:32:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:32:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 25 09:32:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:32 compute-0 ceph-osd[82261]: osd.1 8 state: booting -> active
Nov 25 09:32:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:32:32
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [balancer INFO root] No pools available
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [devicehealth INFO root] creating mgr pool
Nov 25 09:32:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Nov 25 09:32:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:32:32 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:32:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 25 09:32:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 09:32:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 09:32:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Nov 25 09:32:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Nov 25 09:32:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 09:32:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 09:32:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 09:32:33 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Nov 25 09:32:33 compute-0 ceph-mon[74207]: OSD bench result of 23848.115549 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
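Note: the warning above means the OSD bench result (23848 IOPS) fell outside the 50-500 IOPS sanity window, so mclock kept the 315 IOPS default for osd.0. A sketch of the recommended follow-up, assuming the backing disk is SSD-class and treating 20000 as a placeholder for whatever fio actually measures:

    # read-only fio pass against the OSD's device (randread, safe on a live disk)
    fio --name=iops-probe --filename=/dev/vda --rw=randread --bs=4k \
        --iodepth=32 --direct=1 --ioengine=libaio --runtime=60 --time_based
    # pin the measured capacity for this OSD (ssd variant assumed)
    ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 20000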
Nov 25 09:32:33 compute-0 ceph-mon[74207]: osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021] boot
Nov 25 09:32:33 compute-0 ceph-mon[74207]: osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040] boot
Nov 25 09:32:33 compute-0 ceph-mon[74207]: osdmap e8: 2 total, 2 up, 2 in
Nov 25 09:32:33 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:32:33 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:32:33 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 09:32:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Nov 25 09:32:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 09:32:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 25 09:32:34 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:32:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 09:32:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Nov 25 09:32:34 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Nov 25 09:32:34 compute-0 ceph-osd[82261]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 09:32:34 compute-0 ceph-osd[82261]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 25 09:32:34 compute-0 ceph-osd[82261]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 09:32:34 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 09:32:34 compute-0 ceph-mon[74207]: osdmap e9: 2 total, 2 up, 2 in
Nov 25 09:32:34 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 09:32:34 compute-0 ceph-mon[74207]: pgmap v32: 1 pgs: 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:34 compute-0 ceph-mgr[74476]: [devicehealth INFO root] creating main.db for devicehealth
Nov 25 09:32:34 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Check health
Nov 25 09:32:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 09:32:34 compute-0 sudo[84328]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 25 09:32:34 compute-0 sudo[84328]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 09:32:34 compute-0 sudo[84328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 25 09:32:34 compute-0 sudo[84328]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
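Note: the devicehealth module just scraped SMART data through sudo smartctl (the admin-socket 'smart' command above). The stored metrics can be queried later; <devid> is a placeholder, and on this VM the disk may only appear under a synthetic id since vda exposes no serial:

    ceph device ls                          # devices known to devicehealth
    ceph device get-health-metrics <devid>  # stored SMART samples for one device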
Nov 25 09:32:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 25 09:32:34 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:32:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 25 09:32:35 compute-0 ceph-mon[74207]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:32:35 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 09:32:35 compute-0 ceph-mon[74207]: osdmap e10: 2 total, 2 up, 2 in
Nov 25 09:32:35 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 09:32:35 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 09:32:35 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:32:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Nov 25 09:32:35 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.zcfgby(active, since 62s)
Nov 25 09:32:35 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Nov 25 09:32:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:32:36 compute-0 ceph-mon[74207]: mgrmap e9: compute-0.zcfgby(active, since 62s)
Nov 25 09:32:36 compute-0 ceph-mon[74207]: osdmap e11: 2 total, 2 up, 2 in
Nov 25 09:32:36 compute-0 ceph-mon[74207]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:37 compute-0 ceph-mon[74207]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
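Note: the POOL_APP_NOT_ENABLED warning was only the window between pool creation and the application-enable call, and it cleared on its own. The manual equivalent of what the mgr dispatched would be:

    ceph osd pool application enable .mgr mgr --yes-i-really-mean-it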
Nov 25 09:32:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:38 compute-0 ceph-mon[74207]: pgmap v36: 1 pgs: 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:40 compute-0 ceph-mon[74207]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:42 compute-0 ceph-mon[74207]: pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:44 compute-0 ceph-mon[74207]: pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:46 compute-0 ceph-mon[74207]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:32:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:32:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:32:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:32:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 25 09:32:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:47 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:32:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:32:47 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:32:47 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:32:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:32:47 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:32:47 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:32:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:32:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:32:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:32:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:48 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:32:48 compute-0 ceph-mon[74207]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:32:48 compute-0 ceph-mon[74207]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:32:48 compute-0 ceph-mon[74207]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:32:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:32:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:32:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:32:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:32:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:48 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:48 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev d30fde1a-6edc-4c8e-8d66-71f1320ef42a (Updating mon deployment (+2 -> 3))
Nov 25 09:32:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 25 09:32:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:32:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 25 09:32:48 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:32:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:48 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Nov 25 09:32:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Nov 25 09:32:49 compute-0 ceph-mon[74207]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:32:49 compute-0 ceph-mon[74207]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:32:49 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:49 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:49 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:49 compute-0 ceph-mon[74207]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:49 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:32:49 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:32:49 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:49 compute-0 ceph-mon[74207]: Deploying daemon mon.compute-2 on compute-2
Nov 25 09:32:49 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 25 09:32:49 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Cluster is now healthy
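Note: with CEPHADM_APPLY_SPEC_FAIL cleared, the cluster reports healthy for the first time in this run; a quick operator cross-check at this point would be:

    ceph -s             # quorum, mgr, osd and pg summary
    ceph health detail  # expands any warnings that remain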
Nov 25 09:32:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 25 09:32:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 25 09:32:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 25 09:32:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Nov 25 09:32:50 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2967379413; not ready for session (expect reconnect)
Nov 25 09:32:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:32:50 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:50 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Nov 25 09:32:50 compute-0 ceph-mon[74207]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 25 09:32:50 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:32:50 compute-0 ceph-mon[74207]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:32:50 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:50 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 25 09:32:50 compute-0 ceph-mon[74207]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
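Note: adding compute-2 to the monmap forces a new election round, so the "calling monitor election" and "mid-election, bumping" lines here are expected rather than a fault. Quorum can be watched with:

    ceph mon stat   # one-line summary: monmap epoch, ranks, current leader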
Nov 25 09:32:50 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 25 09:32:50 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
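Note: "no unique device id for vda" recurs because this KVM guest's virtio disk exposes neither model nor serial, leaving Ceph nothing stable to build a device id from. An illustrative host-side check (not taken from this log):

    udevadm info --query=property --name=/dev/vda | grep -E 'ID_SERIAL|ID_MODEL'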
Nov 25 09:32:50 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:51 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:32:51 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2967379413; not ready for session (expect reconnect)
Nov 25 09:32:51 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:32:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:51 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 25 09:32:52 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2967379413; not ready for session (expect reconnect)
Nov 25 09:32:52 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:32:52 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:52 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 25 09:32:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:53 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2967379413; not ready for session (expect reconnect)
Nov 25 09:32:53 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:32:53 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:53 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 25 09:32:54 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2967379413; not ready for session (expect reconnect)
Nov 25 09:32:54 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:32:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:54 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 25 09:32:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:55 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2967379413; not ready for session (expect reconnect)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 25 09:32:55 compute-0 ceph-mon[74207]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : monmap epoch 2
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : last_changed 2025-11-25T09:32:50.766337+0000
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : created 2025-11-25T09:31:14.695764+0000
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.zcfgby(active, since 82s)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : overall HEALTH_OK
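Note: the monmap dump above (epoch 2, mons compute-0 and compute-2 in quorum) can be reproduced on demand, as can the epoch 3 dump further down once compute-1 joins:

    ceph mon dump                      # monmap epoch, fsid, per-rank addresses
    ceph quorum_status -f json-pretty  # election epoch and quorum membership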
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Nov 25 09:32:55 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0 calling monitor election
Nov 25 09:32:55 compute-0 ceph-mon[74207]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-2 calling monitor election
Nov 25 09:32:55 compute-0 ceph-mon[74207]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: monmap epoch 2
Nov 25 09:32:55 compute-0 ceph-mon[74207]: fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:32:55 compute-0 ceph-mon[74207]: last_changed 2025-11-25T09:32:50.766337+0000
Nov 25 09:32:55 compute-0 ceph-mon[74207]: created 2025-11-25T09:31:14.695764+0000
Nov 25 09:32:55 compute-0 ceph-mon[74207]: min_mon_release 19 (squid)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: election_strategy: 1
Nov 25 09:32:55 compute-0 ceph-mon[74207]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 09:32:55 compute-0 ceph-mon[74207]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 25 09:32:55 compute-0 ceph-mon[74207]: fsmap 
Nov 25 09:32:55 compute-0 ceph-mon[74207]: osdmap e11: 2 total, 2 up, 2 in
Nov 25 09:32:55 compute-0 ceph-mon[74207]: mgrmap e9: compute-0.zcfgby(active, since 82s)
Nov 25 09:32:55 compute-0 ceph-mon[74207]: overall HEALTH_OK
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:32:55 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:56 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2967379413; not ready for session (expect reconnect)
Nov 25 09:32:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:32:56 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:56 compute-0 ceph-mon[74207]: Deploying daemon mon.compute-1 on compute-1
Nov 25 09:32:56 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev d30fde1a-6edc-4c8e-8d66-71f1320ef42a (Updating mon deployment (+2 -> 3))
Nov 25 09:32:57 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event d30fde1a-6edc-4c8e-8d66-71f1320ef42a (Updating mon deployment (+2 -> 3)) in 8 seconds
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:32:57 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev ddf431f3-dc01-4f18-9acf-2669b9a234af (Updating mgr deployment (+2 -> 3))
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.flybft", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.flybft", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.flybft", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:32:57 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.flybft on compute-2
Nov 25 09:32:57 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.flybft on compute-2
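Note: cephadm is now scaling the mgr service toward three daemons (the "+2 -> 3" progress event above); placement and per-daemon state can be followed with (a sketch):

    ceph orch ls mgr                # target count vs running count
    ceph orch ps --daemon-type mgr  # where each mgr daemon landed and its status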
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Nov 25 09:32:57 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3424518606; not ready for session (expect reconnect)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:32:57 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 25 09:32:57 compute-0 ceph-mon[74207]: paxos.0).electionLogic(10) init, last seen epoch 10
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:32:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:32:57 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 25 09:32:57 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 3 completed events
Nov 25 09:32:57 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:32:58 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3424518606; not ready for session (expect reconnect)
Nov 25 09:32:58 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:32:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:32:58 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 25 09:32:58 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:32:58 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:32:58 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:32:58 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:32:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:32:59 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3424518606; not ready for session (expect reconnect)
Nov 25 09:32:59 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:32:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:32:59 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 25 09:32:59 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:00 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3424518606; not ready for session (expect reconnect)
Nov 25 09:33:00 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:33:00 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:00 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 25 09:33:00 compute-0 sudo[84354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjugprlrwqpxhaxupwoehrvbxuhcvhyz ; /usr/bin/python3'
Nov 25 09:33:00 compute-0 sudo[84354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:00 compute-0 python3[84356]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
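Note: the Ansible task above runs the admin client in a throwaway container to count up OSDs; trimmed to its essentials (conf and keyring read from the mounted /etc/ceph, the --fsid flag and spec-file mounts dropped), the same check is:

    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      status --format json | jq .osdmap.num_up_osds   # expect 2 at this point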
Nov 25 09:33:00 compute-0 podman[84358]: 2025-11-25 09:33:00.269879594 +0000 UTC m=+0.027753104 container create d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6 (image=quay.io/ceph/ceph:v19, name=vibrant_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:33:00 compute-0 systemd[1]: Started libpod-conmon-d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6.scope.
Nov 25 09:33:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffad829a0bc03702cfe3b73a75125abe2b4371eb1bde78ed1dfc024e6c39d53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffad829a0bc03702cfe3b73a75125abe2b4371eb1bde78ed1dfc024e6c39d53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffad829a0bc03702cfe3b73a75125abe2b4371eb1bde78ed1dfc024e6c39d53/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:00 compute-0 podman[84358]: 2025-11-25 09:33:00.317444101 +0000 UTC m=+0.075317601 container init d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6 (image=quay.io/ceph/ceph:v19, name=vibrant_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:33:00 compute-0 podman[84358]: 2025-11-25 09:33:00.321862167 +0000 UTC m=+0.079735667 container start d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6 (image=quay.io/ceph/ceph:v19, name=vibrant_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:33:00 compute-0 podman[84358]: 2025-11-25 09:33:00.32329741 +0000 UTC m=+0.081170910 container attach d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6 (image=quay.io/ceph/ceph:v19, name=vibrant_liskov, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:00 compute-0 podman[84358]: 2025-11-25 09:33:00.258575656 +0000 UTC m=+0.016449156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:00 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:00 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:01 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:01 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3424518606; not ready for session (expect reconnect)
Nov 25 09:33:01 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:33:01 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:01 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 25 09:33:01 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:01 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:01 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 25 09:33:01 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
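Note: the handle_auth_request failures in this stretch are transient; a mon that is mid-election cannot assign global_ids, so new client sessions bounce until quorum re-forms at 09:33:02 below. If such failures persisted after an election, clock skew between mons would be a usual first suspect (illustrative check):

    chronyc tracking   # verify the local clock is synchronized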
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3424518606; not ready for session (expect reconnect)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 25 09:33:02 compute-0 ceph-mon[74207]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : monmap epoch 3
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : last_changed 2025-11-25T09:32:57.086503+0000
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : created 2025-11-25T09:31:14.695764+0000
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.zcfgby(active, since 89s)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.plffrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.plffrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.plffrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.plffrn on compute-1
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.plffrn on compute-1
Nov 25 09:33:02 compute-0 ceph-mon[74207]: Deploying daemon mgr.compute-2.flybft on compute-2
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0 calling monitor election
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-2 calling monitor election
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-1 calling monitor election
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: monmap epoch 3
Nov 25 09:33:02 compute-0 ceph-mon[74207]: fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:02 compute-0 ceph-mon[74207]: last_changed 2025-11-25T09:32:57.086503+0000
Nov 25 09:33:02 compute-0 ceph-mon[74207]: created 2025-11-25T09:31:14.695764+0000
Nov 25 09:33:02 compute-0 ceph-mon[74207]: min_mon_release 19 (squid)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: election_strategy: 1
Nov 25 09:33:02 compute-0 ceph-mon[74207]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 09:33:02 compute-0 ceph-mon[74207]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 25 09:33:02 compute-0 ceph-mon[74207]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 25 09:33:02 compute-0 ceph-mon[74207]: fsmap 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: osdmap e11: 2 total, 2 up, 2 in
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mgrmap e9: compute-0.zcfgby(active, since 89s)
Nov 25 09:33:02 compute-0 ceph-mon[74207]: overall HEALTH_OK
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.plffrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.plffrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:33:02 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:33:03 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3424518606; not ready for session (expect reconnect)
Nov 25 09:33:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:33:03 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:03 compute-0 ceph-mon[74207]: Deploying daemon mgr.compute-1.plffrn on compute-1
Nov 25 09:33:03 compute-0 ceph-mon[74207]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:03 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 25 09:33:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:03 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev ddf431f3-dc01-4f18-9acf-2669b9a234af (Updating mgr deployment (+2 -> 3))
Nov 25 09:33:03 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event ddf431f3-dc01-4f18-9acf-2669b9a234af (Updating mgr deployment (+2 -> 3)) in 6 seconds
Nov 25 09:33:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 25 09:33:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:03 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev de12f8d2-b992-4dfd-a5b3-a17c99ace693 (Updating crash deployment (+1 -> 3))
Nov 25 09:33:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 25 09:33:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:33:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 09:33:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:03 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:03 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Nov 25 09:33:03 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Nov 25 09:33:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 25 09:33:03 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2646714900' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:33:03 compute-0 vibrant_liskov[84371]: 
Nov 25 09:33:03 compute-0 vibrant_liskov[84371]: {"fsid":"af1c9ae3-08d7-5547-a53d-2cccf7c6ef90","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":1,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":2,"osd_up_since":1764063152,"num_in_osds":2,"osd_in_since":1764063139,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55771136,"bytes_avail":42885513216,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-11-25T09:31:16:071954+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T09:32:33.847154+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"ddf431f3-dc01-4f18-9acf-2669b9a234af":{"message":"Updating mgr deployment (+2 -> 3) (5s)\n      [==============..............] (remaining: 5s)","progress":0.5,"add_to_ceph_s":true}}}
Nov 25 09:33:03 compute-0 systemd[1]: libpod-d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6.scope: Deactivated successfully.
Nov 25 09:33:03 compute-0 conmon[84371]: conmon d75f9e047b74b9e6f9b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6.scope/container/memory.events
Nov 25 09:33:03 compute-0 podman[84358]: 2025-11-25 09:33:03.643613396 +0000 UTC m=+3.401486885 container died d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6 (image=quay.io/ceph/ceph:v19, name=vibrant_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ffad829a0bc03702cfe3b73a75125abe2b4371eb1bde78ed1dfc024e6c39d53-merged.mount: Deactivated successfully.
Nov 25 09:33:03 compute-0 podman[84358]: 2025-11-25 09:33:03.670034091 +0000 UTC m=+3.427907590 container remove d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6 (image=quay.io/ceph/ceph:v19, name=vibrant_liskov, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:03 compute-0 systemd[1]: libpod-conmon-d75f9e047b74b9e6f9b5d363bb2a7df86588451a6cf947e7cd2b66386a53bcf6.scope: Deactivated successfully.
Nov 25 09:33:03 compute-0 sudo[84354]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:03 compute-0 sudo[84428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbqtjwritqhamviojgqejfymiesbjnbi ; /usr/bin/python3'
Nov 25 09:33:03 compute-0 sudo[84428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:03 compute-0 python3[84430]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:04 compute-0 podman[84431]: 2025-11-25 09:33:04.038064123 +0000 UTC m=+0.029281001 container create 347ceb72c6d1e0463b2fd3fb65b4f3c8c0d236ecca0edcc8799b888d0776caeb (image=quay.io/ceph/ceph:v19, name=optimistic_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:33:04 compute-0 systemd[1]: Started libpod-conmon-347ceb72c6d1e0463b2fd3fb65b4f3c8c0d236ecca0edcc8799b888d0776caeb.scope.
Nov 25 09:33:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4890a804a65005809f4c9801b0d4722ebee813a24b52089781e4750e104cced9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4890a804a65005809f4c9801b0d4722ebee813a24b52089781e4750e104cced9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:04 compute-0 ceph-mgr[74476]: mgr.server handle_report got status from non-daemon mon.compute-1
Nov 25 09:33:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:04.087+0000 7fdac49b1640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Nov 25 09:33:04 compute-0 podman[84431]: 2025-11-25 09:33:04.091140326 +0000 UTC m=+0.082357204 container init 347ceb72c6d1e0463b2fd3fb65b4f3c8c0d236ecca0edcc8799b888d0776caeb (image=quay.io/ceph/ceph:v19, name=optimistic_kapitsa, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:04 compute-0 podman[84431]: 2025-11-25 09:33:04.095201419 +0000 UTC m=+0.086418298 container start 347ceb72c6d1e0463b2fd3fb65b4f3c8c0d236ecca0edcc8799b888d0776caeb (image=quay.io/ceph/ceph:v19, name=optimistic_kapitsa, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:04 compute-0 podman[84431]: 2025-11-25 09:33:04.097915529 +0000 UTC m=+0.089132427 container attach 347ceb72c6d1e0463b2fd3fb65b4f3c8c0d236ecca0edcc8799b888d0776caeb (image=quay.io/ceph/ceph:v19, name=optimistic_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:04 compute-0 podman[84431]: 2025-11-25 09:33:04.027043046 +0000 UTC m=+0.018259934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:04 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:33:04 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 09:33:04 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:04 compute-0 ceph-mon[74207]: Deploying daemon crash.compute-2 on compute-2
Nov 25 09:33:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2646714900' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2513898650' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev de12f8d2-b992-4dfd-a5b3-a17c99ace693 (Updating crash deployment (+1 -> 3))
Nov 25 09:33:04 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event de12f8d2-b992-4dfd-a5b3-a17c99ace693 (Updating crash deployment (+1 -> 3)) in 1 seconds
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:04 compute-0 sudo[84469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:04 compute-0 sudo[84469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:04 compute-0 sudo[84469]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:04 compute-0 sudo[84494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:33:04 compute-0 sudo[84494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:04 compute-0 podman[84552]: 2025-11-25 09:33:04.802472331 +0000 UTC m=+0.030385750 container create 41356ee8149f8c29c4346e1b9a70048482fcf38f6c59a8e8815266cf624915f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:33:04 compute-0 systemd[1]: Started libpod-conmon-41356ee8149f8c29c4346e1b9a70048482fcf38f6c59a8e8815266cf624915f1.scope.
Nov 25 09:33:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:04 compute-0 podman[84552]: 2025-11-25 09:33:04.850118111 +0000 UTC m=+0.078031530 container init 41356ee8149f8c29c4346e1b9a70048482fcf38f6c59a8e8815266cf624915f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:33:04 compute-0 podman[84552]: 2025-11-25 09:33:04.8537351 +0000 UTC m=+0.081648519 container start 41356ee8149f8c29c4346e1b9a70048482fcf38f6c59a8e8815266cf624915f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:04 compute-0 podman[84552]: 2025-11-25 09:33:04.855065474 +0000 UTC m=+0.082978892 container attach 41356ee8149f8c29c4346e1b9a70048482fcf38f6c59a8e8815266cf624915f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:04 compute-0 ecstatic_einstein[84565]: 167 167
Nov 25 09:33:04 compute-0 systemd[1]: libpod-41356ee8149f8c29c4346e1b9a70048482fcf38f6c59a8e8815266cf624915f1.scope: Deactivated successfully.
Nov 25 09:33:04 compute-0 podman[84552]: 2025-11-25 09:33:04.8582987 +0000 UTC m=+0.086212119 container died 41356ee8149f8c29c4346e1b9a70048482fcf38f6c59a8e8815266cf624915f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee7e4980dbcad95dc725b8c18035c8b057ee8111f28fc563f6ba9ef281e507af-merged.mount: Deactivated successfully.
Nov 25 09:33:04 compute-0 podman[84552]: 2025-11-25 09:33:04.874019074 +0000 UTC m=+0.101932492 container remove 41356ee8149f8c29c4346e1b9a70048482fcf38f6c59a8e8815266cf624915f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:04 compute-0 podman[84552]: 2025-11-25 09:33:04.78837806 +0000 UTC m=+0.016291489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:04 compute-0 systemd[1]: libpod-conmon-41356ee8149f8c29c4346e1b9a70048482fcf38f6c59a8e8815266cf624915f1.scope: Deactivated successfully.
Nov 25 09:33:04 compute-0 podman[84588]: 2025-11-25 09:33:04.980821882 +0000 UTC m=+0.026181025 container create 5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_leakey, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:33:05 compute-0 systemd[1]: Started libpod-conmon-5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc.scope.
Nov 25 09:33:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d757b5380346417adc3689ae40dae5285277bf4a0688a23968c889cd27621b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d757b5380346417adc3689ae40dae5285277bf4a0688a23968c889cd27621b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d757b5380346417adc3689ae40dae5285277bf4a0688a23968c889cd27621b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d757b5380346417adc3689ae40dae5285277bf4a0688a23968c889cd27621b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d757b5380346417adc3689ae40dae5285277bf4a0688a23968c889cd27621b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 podman[84588]: 2025-11-25 09:33:05.034185385 +0000 UTC m=+0.079544538 container init 5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:05 compute-0 podman[84588]: 2025-11-25 09:33:05.040304795 +0000 UTC m=+0.085663928 container start 5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:05 compute-0 podman[84588]: 2025-11-25 09:33:05.043130995 +0000 UTC m=+0.088490138 container attach 5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:05 compute-0 podman[84588]: 2025-11-25 09:33:04.970372052 +0000 UTC m=+0.015731215 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 25 09:33:05 compute-0 happy_leakey[84601]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:33:05 compute-0 happy_leakey[84601]: --> All data devices are unavailable
Nov 25 09:33:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2513898650' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Nov 25 09:33:05 compute-0 optimistic_kapitsa[84443]: pool 'vms' created
Nov 25 09:33:05 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2513898650' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:05 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:05 compute-0 ceph-mon[74207]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:05 compute-0 systemd[1]: libpod-347ceb72c6d1e0463b2fd3fb65b4f3c8c0d236ecca0edcc8799b888d0776caeb.scope: Deactivated successfully.
Nov 25 09:33:05 compute-0 podman[84431]: 2025-11-25 09:33:05.295403281 +0000 UTC m=+1.286620169 container died 347ceb72c6d1e0463b2fd3fb65b4f3c8c0d236ecca0edcc8799b888d0776caeb (image=quay.io/ceph/ceph:v19, name=optimistic_kapitsa, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:33:05 compute-0 systemd[1]: libpod-5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc.scope: Deactivated successfully.
Nov 25 09:33:05 compute-0 conmon[84601]: conmon 5c5e6801eac2309424db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc.scope/container/memory.events
Nov 25 09:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4890a804a65005809f4c9801b0d4722ebee813a24b52089781e4750e104cced9-merged.mount: Deactivated successfully.
Nov 25 09:33:05 compute-0 podman[84431]: 2025-11-25 09:33:05.315538143 +0000 UTC m=+1.306755022 container remove 347ceb72c6d1e0463b2fd3fb65b4f3c8c0d236ecca0edcc8799b888d0776caeb (image=quay.io/ceph/ceph:v19, name=optimistic_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:05 compute-0 sudo[84428]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:05 compute-0 systemd[1]: libpod-conmon-347ceb72c6d1e0463b2fd3fb65b4f3c8c0d236ecca0edcc8799b888d0776caeb.scope: Deactivated successfully.
Nov 25 09:33:05 compute-0 podman[84620]: 2025-11-25 09:33:05.342556414 +0000 UTC m=+0.026117104 container died 5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_leakey, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d757b5380346417adc3689ae40dae5285277bf4a0688a23968c889cd27621b4-merged.mount: Deactivated successfully.
Nov 25 09:33:05 compute-0 podman[84620]: 2025-11-25 09:33:05.360506935 +0000 UTC m=+0.044067625 container remove 5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:05 compute-0 systemd[1]: libpod-conmon-5c5e6801eac2309424db9a32749a4a8b509ce2d7ea837b2ef79f257c3f0242bc.scope: Deactivated successfully.
Nov 25 09:33:05 compute-0 sudo[84494]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:05 compute-0 sudo[84675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sowextyrfblbmetlfqvzmtyjoadrfjsj ; /usr/bin/python3'
Nov 25 09:33:05 compute-0 sudo[84675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:05 compute-0 sudo[84652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:05 compute-0 sudo[84652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:05 compute-0 sudo[84652]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:05 compute-0 sudo[84691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:33:05 compute-0 sudo[84691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:05 compute-0 python3[84688]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:05 compute-0 podman[84716]: 2025-11-25 09:33:05.576921851 +0000 UTC m=+0.029195780 container create 352f40de6186322cf2d6108a32236a70fcd05e2961a3aea9e47009cfd6a45337 (image=quay.io/ceph/ceph:v19, name=youthful_bose, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:33:05 compute-0 systemd[1]: Started libpod-conmon-352f40de6186322cf2d6108a32236a70fcd05e2961a3aea9e47009cfd6a45337.scope.
Nov 25 09:33:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1e8c4da68bbe82db7e9d47e22ac7b8bf72979f9cdfd8888fdbd81263ede39e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1e8c4da68bbe82db7e9d47e22ac7b8bf72979f9cdfd8888fdbd81263ede39e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 podman[84716]: 2025-11-25 09:33:05.639517393 +0000 UTC m=+0.091791322 container init 352f40de6186322cf2d6108a32236a70fcd05e2961a3aea9e47009cfd6a45337 (image=quay.io/ceph/ceph:v19, name=youthful_bose, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:05 compute-0 podman[84716]: 2025-11-25 09:33:05.64700606 +0000 UTC m=+0.099279979 container start 352f40de6186322cf2d6108a32236a70fcd05e2961a3aea9e47009cfd6a45337 (image=quay.io/ceph/ceph:v19, name=youthful_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:33:05 compute-0 podman[84716]: 2025-11-25 09:33:05.652921747 +0000 UTC m=+0.105195686 container attach 352f40de6186322cf2d6108a32236a70fcd05e2961a3aea9e47009cfd6a45337 (image=quay.io/ceph/ceph:v19, name=youthful_bose, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:33:05 compute-0 podman[84716]: 2025-11-25 09:33:05.564636785 +0000 UTC m=+0.016910725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e"} v 0)
Nov 25 09:33:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/4258346990' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e"}]: dispatch
Nov 25 09:33:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 25 09:33:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/4258346990' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e"}]': finished
Nov 25 09:33:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Nov 25 09:33:05 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Nov 25 09:33:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:05 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:05 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:05 compute-0 podman[84763]: 2025-11-25 09:33:05.76731801 +0000 UTC m=+0.031917202 container create 1431f3ef99c92d6dcc90ee5ca4d6c9a26fd51a0db158a6f523e39e590092575d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:33:05 compute-0 systemd[1]: Started libpod-conmon-1431f3ef99c92d6dcc90ee5ca4d6c9a26fd51a0db158a6f523e39e590092575d.scope.
Nov 25 09:33:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:05 compute-0 podman[84763]: 2025-11-25 09:33:05.81636677 +0000 UTC m=+0.080965963 container init 1431f3ef99c92d6dcc90ee5ca4d6c9a26fd51a0db158a6f523e39e590092575d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cori, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 25 09:33:05 compute-0 podman[84763]: 2025-11-25 09:33:05.820320704 +0000 UTC m=+0.084919897 container start 1431f3ef99c92d6dcc90ee5ca4d6c9a26fd51a0db158a6f523e39e590092575d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:05 compute-0 podman[84763]: 2025-11-25 09:33:05.822419314 +0000 UTC m=+0.087018507 container attach 1431f3ef99c92d6dcc90ee5ca4d6c9a26fd51a0db158a6f523e39e590092575d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:05 compute-0 heuristic_cori[84793]: 167 167
Nov 25 09:33:05 compute-0 systemd[1]: libpod-1431f3ef99c92d6dcc90ee5ca4d6c9a26fd51a0db158a6f523e39e590092575d.scope: Deactivated successfully.
Nov 25 09:33:05 compute-0 podman[84763]: 2025-11-25 09:33:05.824179257 +0000 UTC m=+0.088778451 container died 1431f3ef99c92d6dcc90ee5ca4d6c9a26fd51a0db158a6f523e39e590092575d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cori, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-42ce84438b104c27a3ceba7232921f45177bcf04302ece42030703c78e43eadd-merged.mount: Deactivated successfully.
Nov 25 09:33:05 compute-0 podman[84763]: 2025-11-25 09:33:05.842938991 +0000 UTC m=+0.107538184 container remove 1431f3ef99c92d6dcc90ee5ca4d6c9a26fd51a0db158a6f523e39e590092575d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cori, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:05 compute-0 podman[84763]: 2025-11-25 09:33:05.755834703 +0000 UTC m=+0.020433917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:05 compute-0 systemd[1]: libpod-conmon-1431f3ef99c92d6dcc90ee5ca4d6c9a26fd51a0db158a6f523e39e590092575d.scope: Deactivated successfully.
Nov 25 09:33:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 25 09:33:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3191460124' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:05 compute-0 podman[84814]: 2025-11-25 09:33:05.954146252 +0000 UTC m=+0.025628105 container create 9fa4db175f50e2f435c09f85d5f28a0b631858406e8acadd8d456d10871054e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:33:05 compute-0 systemd[1]: Started libpod-conmon-9fa4db175f50e2f435c09f85d5f28a0b631858406e8acadd8d456d10871054e2.scope.
Nov 25 09:33:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ee52f85615185f66ab0786beabaf26b80bea0edb6c2d83241a560c0a2897d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ee52f85615185f66ab0786beabaf26b80bea0edb6c2d83241a560c0a2897d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ee52f85615185f66ab0786beabaf26b80bea0edb6c2d83241a560c0a2897d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877ee52f85615185f66ab0786beabaf26b80bea0edb6c2d83241a560c0a2897d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:06 compute-0 podman[84814]: 2025-11-25 09:33:06.010778709 +0000 UTC m=+0.082260561 container init 9fa4db175f50e2f435c09f85d5f28a0b631858406e8acadd8d456d10871054e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_driscoll, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:06 compute-0 podman[84814]: 2025-11-25 09:33:06.017408087 +0000 UTC m=+0.088889941 container start 9fa4db175f50e2f435c09f85d5f28a0b631858406e8acadd8d456d10871054e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_driscoll, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:33:06 compute-0 podman[84814]: 2025-11-25 09:33:06.018574132 +0000 UTC m=+0.090055986 container attach 9fa4db175f50e2f435c09f85d5f28a0b631858406e8acadd8d456d10871054e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_driscoll, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:33:06 compute-0 podman[84814]: 2025-11-25 09:33:05.943958815 +0000 UTC m=+0.015440687 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]: {
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:     "1": [
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:         {
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "devices": [
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "/dev/loop3"
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             ],
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "lv_name": "ceph_lv0",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "lv_size": "21470642176",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "name": "ceph_lv0",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "tags": {
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.cluster_name": "ceph",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.crush_device_class": "",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.encrypted": "0",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.osd_id": "1",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.type": "block",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.vdo": "0",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:                 "ceph.with_tpm": "0"
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             },
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "type": "block",
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:             "vg_name": "ceph_vg0"
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:         }
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]:     ]
Nov 25 09:33:06 compute-0 wizardly_driscoll[84830]: }
Nov 25 09:33:06 compute-0 systemd[1]: libpod-9fa4db175f50e2f435c09f85d5f28a0b631858406e8acadd8d456d10871054e2.scope: Deactivated successfully.
Nov 25 09:33:06 compute-0 podman[84814]: 2025-11-25 09:33:06.248164422 +0000 UTC m=+0.319646275 container died 9fa4db175f50e2f435c09f85d5f28a0b631858406e8acadd8d456d10871054e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-877ee52f85615185f66ab0786beabaf26b80bea0edb6c2d83241a560c0a2897d-merged.mount: Deactivated successfully.
Nov 25 09:33:06 compute-0 podman[84814]: 2025-11-25 09:33:06.268868306 +0000 UTC m=+0.340350158 container remove 9fa4db175f50e2f435c09f85d5f28a0b631858406e8acadd8d456d10871054e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_driscoll, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 25 09:33:06 compute-0 systemd[1]: libpod-conmon-9fa4db175f50e2f435c09f85d5f28a0b631858406e8acadd8d456d10871054e2.scope: Deactivated successfully.
Nov 25 09:33:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2513898650' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:06 compute-0 ceph-mon[74207]: osdmap e12: 2 total, 2 up, 2 in
Nov 25 09:33:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4258346990' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e"}]: dispatch
Nov 25 09:33:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4258346990' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e"}]': finished
Nov 25 09:33:06 compute-0 ceph-mon[74207]: osdmap e13: 3 total, 2 up, 3 in
Nov 25 09:33:06 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3191460124' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4042166828' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 09:33:06 compute-0 sudo[84691]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:06 compute-0 sudo[84849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:06 compute-0 sudo[84849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:06 compute-0 sudo[84849]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:06 compute-0 sudo[84874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:33:06 compute-0 sudo[84874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:06 compute-0 podman[84929]: 2025-11-25 09:33:06.650346192 +0000 UTC m=+0.026477252 container create 3fd3fb5c5f7162328931b7b5c6de766c2cb7c8f7694eb508f7a81458f726e2e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_tesla, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:33:06 compute-0 systemd[1]: Started libpod-conmon-3fd3fb5c5f7162328931b7b5c6de766c2cb7c8f7694eb508f7a81458f726e2e9.scope.
Nov 25 09:33:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:06 compute-0 podman[84929]: 2025-11-25 09:33:06.698121716 +0000 UTC m=+0.074252796 container init 3fd3fb5c5f7162328931b7b5c6de766c2cb7c8f7694eb508f7a81458f726e2e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_tesla, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:06 compute-0 podman[84929]: 2025-11-25 09:33:06.702121465 +0000 UTC m=+0.078252525 container start 3fd3fb5c5f7162328931b7b5c6de766c2cb7c8f7694eb508f7a81458f726e2e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:06 compute-0 priceless_tesla[84941]: 167 167
Nov 25 09:33:06 compute-0 systemd[1]: libpod-3fd3fb5c5f7162328931b7b5c6de766c2cb7c8f7694eb508f7a81458f726e2e9.scope: Deactivated successfully.
Nov 25 09:33:06 compute-0 podman[84929]: 2025-11-25 09:33:06.706217086 +0000 UTC m=+0.082348166 container attach 3fd3fb5c5f7162328931b7b5c6de766c2cb7c8f7694eb508f7a81458f726e2e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_tesla, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:06 compute-0 podman[84929]: 2025-11-25 09:33:06.706587322 +0000 UTC m=+0.082718382 container died 3fd3fb5c5f7162328931b7b5c6de766c2cb7c8f7694eb508f7a81458f726e2e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 09:33:06 compute-0 podman[84929]: 2025-11-25 09:33:06.721868428 +0000 UTC m=+0.097999489 container remove 3fd3fb5c5f7162328931b7b5c6de766c2cb7c8f7694eb508f7a81458f726e2e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:33:06 compute-0 podman[84929]: 2025-11-25 09:33:06.639182778 +0000 UTC m=+0.015313848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:06 compute-0 systemd[1]: libpod-conmon-3fd3fb5c5f7162328931b7b5c6de766c2cb7c8f7694eb508f7a81458f726e2e9.scope: Deactivated successfully.
Nov 25 09:33:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 25 09:33:06 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3191460124' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 25 09:33:06 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 25 09:33:06 compute-0 youthful_bose[84728]: pool 'volumes' created
Nov 25 09:33:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:06 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:06 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:06 compute-0 systemd[1]: libpod-352f40de6186322cf2d6108a32236a70fcd05e2961a3aea9e47009cfd6a45337.scope: Deactivated successfully.
Nov 25 09:33:06 compute-0 podman[84716]: 2025-11-25 09:33:06.759121203 +0000 UTC m=+1.211395113 container died 352f40de6186322cf2d6108a32236a70fcd05e2961a3aea9e47009cfd6a45337 (image=quay.io/ceph/ceph:v19, name=youthful_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:33:06 compute-0 podman[84716]: 2025-11-25 09:33:06.776225271 +0000 UTC m=+1.228499191 container remove 352f40de6186322cf2d6108a32236a70fcd05e2961a3aea9e47009cfd6a45337 (image=quay.io/ceph/ceph:v19, name=youthful_bose, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:06 compute-0 sudo[84675]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:06 compute-0 systemd[1]: libpod-conmon-352f40de6186322cf2d6108a32236a70fcd05e2961a3aea9e47009cfd6a45337.scope: Deactivated successfully.
Nov 25 09:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-756b44c366a8ee0d56e9f16a0cad5c31abfe7b2ed69e08c8102bf17677ba3569-merged.mount: Deactivated successfully.
Nov 25 09:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa1e8c4da68bbe82db7e9d47e22ac7b8bf72979f9cdfd8888fdbd81263ede39e-merged.mount: Deactivated successfully.
Nov 25 09:33:06 compute-0 podman[84976]: 2025-11-25 09:33:06.845323173 +0000 UTC m=+0.027042876 container create 649c127f667a7c5b191e90c9794d80d95d79cb24a250d7bc07bd75aa47a21ae1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v54: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:06 compute-0 systemd[1]: Started libpod-conmon-649c127f667a7c5b191e90c9794d80d95d79cb24a250d7bc07bd75aa47a21ae1.scope.
Nov 25 09:33:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef51bfe28913b1f0116722da4430eced56755ad76ac385912a44a8b16e3e5479/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef51bfe28913b1f0116722da4430eced56755ad76ac385912a44a8b16e3e5479/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef51bfe28913b1f0116722da4430eced56755ad76ac385912a44a8b16e3e5479/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef51bfe28913b1f0116722da4430eced56755ad76ac385912a44a8b16e3e5479/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:06 compute-0 sudo[85015]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttoriuxzolyybenwfourdocwkykxlutj ; /usr/bin/python3'
Nov 25 09:33:06 compute-0 sudo[85015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:06 compute-0 podman[84976]: 2025-11-25 09:33:06.905341614 +0000 UTC m=+0.087061327 container init 649c127f667a7c5b191e90c9794d80d95d79cb24a250d7bc07bd75aa47a21ae1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cohen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:06 compute-0 podman[84976]: 2025-11-25 09:33:06.91003008 +0000 UTC m=+0.091749773 container start 649c127f667a7c5b191e90c9794d80d95d79cb24a250d7bc07bd75aa47a21ae1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cohen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:06 compute-0 podman[84976]: 2025-11-25 09:33:06.912752024 +0000 UTC m=+0.094471737 container attach 649c127f667a7c5b191e90c9794d80d95d79cb24a250d7bc07bd75aa47a21ae1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:06 compute-0 podman[84976]: 2025-11-25 09:33:06.833838074 +0000 UTC m=+0.015557777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:07 compute-0 python3[85017]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:07 compute-0 podman[85020]: 2025-11-25 09:33:07.04852107 +0000 UTC m=+0.026989537 container create 63f796d3c0d232fe45f19078f854a64fe249b31afbcc786968630435eb3702df (image=quay.io/ceph/ceph:v19, name=xenodochial_khayyam, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:07 compute-0 systemd[1]: Started libpod-conmon-63f796d3c0d232fe45f19078f854a64fe249b31afbcc786968630435eb3702df.scope.
Nov 25 09:33:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1d09ef65fea4bb8ea8933050b5b205a979c7457c88aa5c41c068aba62cbbf7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1d09ef65fea4bb8ea8933050b5b205a979c7457c88aa5c41c068aba62cbbf7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:07 compute-0 podman[85020]: 2025-11-25 09:33:07.100306041 +0000 UTC m=+0.078774509 container init 63f796d3c0d232fe45f19078f854a64fe249b31afbcc786968630435eb3702df (image=quay.io/ceph/ceph:v19, name=xenodochial_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:07 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 5 completed events
Nov 25 09:33:07 compute-0 podman[85020]: 2025-11-25 09:33:07.105287038 +0000 UTC m=+0.083755505 container start 63f796d3c0d232fe45f19078f854a64fe249b31afbcc786968630435eb3702df (image=quay.io/ceph/ceph:v19, name=xenodochial_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:33:07 compute-0 podman[85020]: 2025-11-25 09:33:07.106705569 +0000 UTC m=+0.085174036 container attach 63f796d3c0d232fe45f19078f854a64fe249b31afbcc786968630435eb3702df (image=quay.io/ceph/ceph:v19, name=xenodochial_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:07 compute-0 podman[85020]: 2025-11-25 09:33:07.037924312 +0000 UTC m=+0.016392800 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 25 09:33:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269819066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:07 compute-0 lvm[85126]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:33:07 compute-0 lvm[85126]: VG ceph_vg0 finished
Nov 25 09:33:07 compute-0 sharp_cohen[85002]: {}
Nov 25 09:33:07 compute-0 lvm[85131]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:33:07 compute-0 lvm[85131]: VG ceph_vg0 finished
Nov 25 09:33:07 compute-0 systemd[1]: libpod-649c127f667a7c5b191e90c9794d80d95d79cb24a250d7bc07bd75aa47a21ae1.scope: Deactivated successfully.
Nov 25 09:33:07 compute-0 podman[85132]: 2025-11-25 09:33:07.455658589 +0000 UTC m=+0.016300466 container died 649c127f667a7c5b191e90c9794d80d95d79cb24a250d7bc07bd75aa47a21ae1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef51bfe28913b1f0116722da4430eced56755ad76ac385912a44a8b16e3e5479-merged.mount: Deactivated successfully.
Nov 25 09:33:07 compute-0 podman[85132]: 2025-11-25 09:33:07.477337729 +0000 UTC m=+0.037979605 container remove 649c127f667a7c5b191e90c9794d80d95d79cb24a250d7bc07bd75aa47a21ae1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:07 compute-0 systemd[1]: libpod-conmon-649c127f667a7c5b191e90c9794d80d95d79cb24a250d7bc07bd75aa47a21ae1.scope: Deactivated successfully.
Nov 25 09:33:07 compute-0 sudo[84874]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:07 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 25 09:33:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4269819066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 25 09:33:07 compute-0 xenodochial_khayyam[85040]: pool 'backups' created
Nov 25 09:33:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3191460124' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:07 compute-0 ceph-mon[74207]: osdmap e14: 3 total, 2 up, 3 in
Nov 25 09:33:07 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:07 compute-0 ceph-mon[74207]: pgmap v54: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:07 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4269819066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:07 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:07 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:07 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 25 09:33:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:07 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:07 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:07 compute-0 systemd[1]: libpod-63f796d3c0d232fe45f19078f854a64fe249b31afbcc786968630435eb3702df.scope: Deactivated successfully.
Nov 25 09:33:07 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.flybft started
Nov 25 09:33:07 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mgr.compute-2.flybft 192.168.122.102:0/1293543172; not ready for session (expect reconnect)
Nov 25 09:33:07 compute-0 podman[85146]: 2025-11-25 09:33:07.789642663 +0000 UTC m=+0.016280478 container died 63f796d3c0d232fe45f19078f854a64fe249b31afbcc786968630435eb3702df (image=quay.io/ceph/ceph:v19, name=xenodochial_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b1d09ef65fea4bb8ea8933050b5b205a979c7457c88aa5c41c068aba62cbbf7-merged.mount: Deactivated successfully.
Nov 25 09:33:07 compute-0 podman[85146]: 2025-11-25 09:33:07.8057055 +0000 UTC m=+0.032343294 container remove 63f796d3c0d232fe45f19078f854a64fe249b31afbcc786968630435eb3702df (image=quay.io/ceph/ceph:v19, name=xenodochial_khayyam, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:07 compute-0 systemd[1]: libpod-conmon-63f796d3c0d232fe45f19078f854a64fe249b31afbcc786968630435eb3702df.scope: Deactivated successfully.
Nov 25 09:33:07 compute-0 sudo[85015]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:07 compute-0 sudo[85180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqruwnvglunnjqbvikfutujwzmbqvrqb ; /usr/bin/python3'
Nov 25 09:33:07 compute-0 sudo[85180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:08 compute-0 python3[85182]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:08 compute-0 podman[85183]: 2025-11-25 09:33:08.065051382 +0000 UTC m=+0.027668807 container create 496a7826bb127c73c128a5738241578ca663062fa14db671125dd51f48f9e65d (image=quay.io/ceph/ceph:v19, name=thirsty_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:08 compute-0 systemd[1]: Started libpod-conmon-496a7826bb127c73c128a5738241578ca663062fa14db671125dd51f48f9e65d.scope.
Nov 25 09:33:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6b47d5e117f3762a8b58677fcc13bed05a53ae22caa539d65e30a8a1ddf9c1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6b47d5e117f3762a8b58677fcc13bed05a53ae22caa539d65e30a8a1ddf9c1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:08 compute-0 podman[85183]: 2025-11-25 09:33:08.123637776 +0000 UTC m=+0.086255211 container init 496a7826bb127c73c128a5738241578ca663062fa14db671125dd51f48f9e65d (image=quay.io/ceph/ceph:v19, name=thirsty_elion, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:08 compute-0 podman[85183]: 2025-11-25 09:33:08.127855787 +0000 UTC m=+0.090473212 container start 496a7826bb127c73c128a5738241578ca663062fa14db671125dd51f48f9e65d (image=quay.io/ceph/ceph:v19, name=thirsty_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:33:08 compute-0 podman[85183]: 2025-11-25 09:33:08.129079831 +0000 UTC m=+0.091697266 container attach 496a7826bb127c73c128a5738241578ca663062fa14db671125dd51f48f9e65d (image=quay.io/ceph/ceph:v19, name=thirsty_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:33:08 compute-0 podman[85183]: 2025-11-25 09:33:08.054090929 +0000 UTC m=+0.016708374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 25 09:33:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2992947115' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:08 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.plffrn started
Nov 25 09:33:08 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from mgr.compute-1.plffrn 192.168.122.101:0/4020390777; not ready for session (expect reconnect)
Nov 25 09:33:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 25 09:33:08 compute-0 ceph-mon[74207]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:08 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4269819066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:08 compute-0 ceph-mon[74207]: osdmap e15: 3 total, 2 up, 3 in
Nov 25 09:33:08 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:08 compute-0 ceph-mon[74207]: Standby manager daemon compute-2.flybft started
Nov 25 09:33:08 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2992947115' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:08 compute-0 ceph-mon[74207]: Standby manager daemon compute-1.plffrn started
Nov 25 09:33:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2992947115' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 25 09:33:08 compute-0 thirsty_elion[85197]: pool 'images' created
Nov 25 09:33:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:08 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 25 09:33:08 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.zcfgby(active, since 95s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"} v 0)
Nov 25 09:33:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:33:08 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"} v 0)
Nov 25 09:33:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:33:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:08 compute-0 systemd[1]: libpod-496a7826bb127c73c128a5738241578ca663062fa14db671125dd51f48f9e65d.scope: Deactivated successfully.
Nov 25 09:33:08 compute-0 podman[85183]: 2025-11-25 09:33:08.781562597 +0000 UTC m=+0.744180022 container died 496a7826bb127c73c128a5738241578ca663062fa14db671125dd51f48f9e65d (image=quay.io/ceph/ceph:v19, name=thirsty_elion, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d6b47d5e117f3762a8b58677fcc13bed05a53ae22caa539d65e30a8a1ddf9c1-merged.mount: Deactivated successfully.
Nov 25 09:33:08 compute-0 podman[85183]: 2025-11-25 09:33:08.800268991 +0000 UTC m=+0.762886416 container remove 496a7826bb127c73c128a5738241578ca663062fa14db671125dd51f48f9e65d (image=quay.io/ceph/ceph:v19, name=thirsty_elion, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:33:08 compute-0 sudo[85180]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:08 compute-0 systemd[1]: libpod-conmon-496a7826bb127c73c128a5738241578ca663062fa14db671125dd51f48f9e65d.scope: Deactivated successfully.
Nov 25 09:33:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v57: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:08 compute-0 sudo[85258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rykqamoqfgztwwtfthiflkuuhotqnlnw ; /usr/bin/python3'
Nov 25 09:33:08 compute-0 sudo[85258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:09 compute-0 python3[85260]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
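[annotation] Unwrapped from the Ansible module arguments, the invocation above amounts to the following (reformatted for readability only). Note that 'replicated_rule' is passed as a bare positional after the pool name; as the surrounding mon audit entries record, the CLI parser slots it into the erasure_code_profile field, though it was presumably meant to name the replicated CRUSH rule:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create cephfs.cephfs.meta replicated_rule --autoscale-mode on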
Nov 25 09:33:09 compute-0 podman[85261]: 2025-11-25 09:33:09.057932776 +0000 UTC m=+0.027494547 container create 3cb85f51786015fc4ff0760c18577ed8c2e9c9ac510e6018486ee6458d8f838c (image=quay.io/ceph/ceph:v19, name=gallant_meitner, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:09 compute-0 systemd[1]: Started libpod-conmon-3cb85f51786015fc4ff0760c18577ed8c2e9c9ac510e6018486ee6458d8f838c.scope.
Nov 25 09:33:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5e022c3bb5633f3752b5a7e21d264c7a29627361474ea9c88d05c91f9997ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5e022c3bb5633f3752b5a7e21d264c7a29627361474ea9c88d05c91f9997ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
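[annotation] The 'supports timestamps until 2038' messages are informational: the overlay mounts sit on an XFS filesystem formatted without the bigtime feature, so inode timestamps are 32-bit. One way to check, assuming xfsprogs is installed (bigtime=1 would mean the y2038 limit does not apply):

    # Substitute the mount point backing /var/lib/containers/storage.
    xfs_info / | grep -o 'bigtime=[01]'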
Nov 25 09:33:09 compute-0 podman[85261]: 2025-11-25 09:33:09.097642716 +0000 UTC m=+0.067204487 container init 3cb85f51786015fc4ff0760c18577ed8c2e9c9ac510e6018486ee6458d8f838c (image=quay.io/ceph/ceph:v19, name=gallant_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:09 compute-0 podman[85261]: 2025-11-25 09:33:09.101390232 +0000 UTC m=+0.070952003 container start 3cb85f51786015fc4ff0760c18577ed8c2e9c9ac510e6018486ee6458d8f838c (image=quay.io/ceph/ceph:v19, name=gallant_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:33:09 compute-0 podman[85261]: 2025-11-25 09:33:09.102437243 +0000 UTC m=+0.071999014 container attach 3cb85f51786015fc4ff0760c18577ed8c2e9c9ac510e6018486ee6458d8f838c (image=quay.io/ceph/ceph:v19, name=gallant_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 09:33:09 compute-0 podman[85261]: 2025-11-25 09:33:09.046219917 +0000 UTC m=+0.015781698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 25 09:33:09 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2175287734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 25 09:33:09 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2175287734' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 25 09:33:09 compute-0 gallant_meitner[85273]: pool 'cephfs.cephfs.meta' created
Nov 25 09:33:09 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:09 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 25 09:33:09 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:09 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2992947115' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:09 compute-0 ceph-mon[74207]: osdmap e16: 3 total, 2 up, 3 in
Nov 25 09:33:09 compute-0 ceph-mon[74207]: mgrmap e10: compute-0.zcfgby(active, since 95s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:09 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:09 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:33:09 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:33:09 compute-0 ceph-mon[74207]: pgmap v57: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2175287734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:09 compute-0 systemd[1]: libpod-3cb85f51786015fc4ff0760c18577ed8c2e9c9ac510e6018486ee6458d8f838c.scope: Deactivated successfully.
Nov 25 09:33:09 compute-0 podman[85300]: 2025-11-25 09:33:09.808604215 +0000 UTC m=+0.016385636 container died 3cb85f51786015fc4ff0760c18577ed8c2e9c9ac510e6018486ee6458d8f838c (image=quay.io/ceph/ceph:v19, name=gallant_meitner, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f5e022c3bb5633f3752b5a7e21d264c7a29627361474ea9c88d05c91f9997ec-merged.mount: Deactivated successfully.
Nov 25 09:33:09 compute-0 podman[85300]: 2025-11-25 09:33:09.825288322 +0000 UTC m=+0.033069743 container remove 3cb85f51786015fc4ff0760c18577ed8c2e9c9ac510e6018486ee6458d8f838c (image=quay.io/ceph/ceph:v19, name=gallant_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:33:09 compute-0 systemd[1]: libpod-conmon-3cb85f51786015fc4ff0760c18577ed8c2e9c9ac510e6018486ee6458d8f838c.scope: Deactivated successfully.
Nov 25 09:33:09 compute-0 sudo[85258]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:09 compute-0 sudo[85334]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvtmdnrjltejgmexchftidnkmlrztriq ; /usr/bin/python3'
Nov 25 09:33:09 compute-0 sudo[85334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Nov 25 09:33:10 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 09:33:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:10 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Nov 25 09:33:10 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
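[annotation] The 'auth get' and 'config generate-minimal-conf' calls just above are the usual cephadm deployment handshake: the mgr fetches the new daemon's keyring and a minimal ceph.conf (essentially the fsid and mon_host list) to ship to compute-2 before starting osd.2. Progress could be followed with the orchestrator CLI; a sketch assuming the same admin keyring:

    ceph config generate-minimal-conf   # prints the [global] fsid/mon_host stanza cephadm distributes
    ceph orch ps                        # osd.2 should appear on compute-2 once deployed
    ceph orch device ls                 # the per-host device inventory cephadm maintains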
Nov 25 09:33:10 compute-0 python3[85336]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:10 compute-0 podman[85337]: 2025-11-25 09:33:10.074310028 +0000 UTC m=+0.027437121 container create a46e495ddbf3166fb4ed37335f3b79203598b7e616aad6b9936804065b70bf97 (image=quay.io/ceph/ceph:v19, name=friendly_noether, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Nov 25 09:33:10 compute-0 systemd[1]: Started libpod-conmon-a46e495ddbf3166fb4ed37335f3b79203598b7e616aad6b9936804065b70bf97.scope.
Nov 25 09:33:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867fe411cf86bc0fcbc80c63ad39be31a87301fb48e69d8531cbe3c63027fa8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867fe411cf86bc0fcbc80c63ad39be31a87301fb48e69d8531cbe3c63027fa8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:10 compute-0 podman[85337]: 2025-11-25 09:33:10.116834979 +0000 UTC m=+0.069962071 container init a46e495ddbf3166fb4ed37335f3b79203598b7e616aad6b9936804065b70bf97 (image=quay.io/ceph/ceph:v19, name=friendly_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:33:10 compute-0 podman[85337]: 2025-11-25 09:33:10.121054482 +0000 UTC m=+0.074181574 container start a46e495ddbf3166fb4ed37335f3b79203598b7e616aad6b9936804065b70bf97 (image=quay.io/ceph/ceph:v19, name=friendly_noether, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:33:10 compute-0 podman[85337]: 2025-11-25 09:33:10.122210889 +0000 UTC m=+0.075337991 container attach a46e495ddbf3166fb4ed37335f3b79203598b7e616aad6b9936804065b70bf97 (image=quay.io/ceph/ceph:v19, name=friendly_noether, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:10 compute-0 podman[85337]: 2025-11-25 09:33:10.063738418 +0000 UTC m=+0.016865510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 25 09:33:10 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148501607' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 25 09:33:10 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148501607' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Nov 25 09:33:10 compute-0 friendly_noether[85350]: pool 'cephfs.cephfs.data' created
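[annotation] With cephfs.cephfs.meta and cephfs.cephfs.data both created, the expected next step (not visible in this excerpt) would be tying them into a filesystem, which also tags both pools with the cephfs application. A hedged sketch of that step, assuming 'cephfs' as the filesystem name implied by the pool naming:

    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data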
Nov 25 09:33:10 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Nov 25 09:33:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:10 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:10 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2175287734' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:10 compute-0 ceph-mon[74207]: osdmap e17: 3 total, 2 up, 3 in
Nov 25 09:33:10 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:10 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 09:33:10 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3148501607' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 09:33:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3148501607' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 09:33:10 compute-0 systemd[1]: libpod-a46e495ddbf3166fb4ed37335f3b79203598b7e616aad6b9936804065b70bf97.scope: Deactivated successfully.
Nov 25 09:33:10 compute-0 podman[85337]: 2025-11-25 09:33:10.786804343 +0000 UTC m=+0.739931434 container died a46e495ddbf3166fb4ed37335f3b79203598b7e616aad6b9936804065b70bf97 (image=quay.io/ceph/ceph:v19, name=friendly_noether, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-867fe411cf86bc0fcbc80c63ad39be31a87301fb48e69d8531cbe3c63027fa8e-merged.mount: Deactivated successfully.
Nov 25 09:33:10 compute-0 podman[85337]: 2025-11-25 09:33:10.801668333 +0000 UTC m=+0.754795425 container remove a46e495ddbf3166fb4ed37335f3b79203598b7e616aad6b9936804065b70bf97 (image=quay.io/ceph/ceph:v19, name=friendly_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:33:10 compute-0 systemd[1]: libpod-conmon-a46e495ddbf3166fb4ed37335f3b79203598b7e616aad6b9936804065b70bf97.scope: Deactivated successfully.
Nov 25 09:33:10 compute-0 sudo[85334]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:10 compute-0 sudo[85409]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icochzqbtttfrpfxhlbobridnxvatycf ; /usr/bin/python3'
Nov 25 09:33:10 compute-0 sudo[85409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:11 compute-0 python3[85411]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:11 compute-0 podman[85412]: 2025-11-25 09:33:11.07402159 +0000 UTC m=+0.023357070 container create 4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9 (image=quay.io/ceph/ceph:v19, name=relaxed_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:11 compute-0 systemd[1]: Started libpod-conmon-4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9.scope.
Nov 25 09:33:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/629b3df3cc7bd6c7f8b2b270c9b0ed850367e9d13c35bdac5eff280f4f6ac2cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/629b3df3cc7bd6c7f8b2b270c9b0ed850367e9d13c35bdac5eff280f4f6ac2cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:11 compute-0 podman[85412]: 2025-11-25 09:33:11.118206034 +0000 UTC m=+0.067541514 container init 4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9 (image=quay.io/ceph/ceph:v19, name=relaxed_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 09:33:11 compute-0 podman[85412]: 2025-11-25 09:33:11.121847468 +0000 UTC m=+0.071182949 container start 4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9 (image=quay.io/ceph/ceph:v19, name=relaxed_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:33:11 compute-0 podman[85412]: 2025-11-25 09:33:11.122967878 +0000 UTC m=+0.072303358 container attach 4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9 (image=quay.io/ceph/ceph:v19, name=relaxed_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:33:11 compute-0 podman[85412]: 2025-11-25 09:33:11.06481052 +0000 UTC m=+0.014146020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Nov 25 09:33:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2930438515' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 09:33:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 25 09:33:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2930438515' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 09:33:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Nov 25 09:33:11 compute-0 relaxed_dewdney[85425]: enabled application 'rbd' on pool 'vms'
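[annotation] Each 'enabled application' confirmation should be verifiable from the pool listing; a sketch, assuming the same admin client setup as the commands above:

    ceph osd pool ls detail | grep -E "pool|application"
    # expected: 'application rbd' on vms (and on volumes/backups/images after the later steps)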
Nov 25 09:33:11 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Nov 25 09:33:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:11 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:11 compute-0 ceph-mon[74207]: Deploying daemon osd.2 on compute-2
Nov 25 09:33:11 compute-0 ceph-mon[74207]: osdmap e18: 3 total, 2 up, 3 in
Nov 25 09:33:11 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:11 compute-0 ceph-mon[74207]: pgmap v60: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2930438515' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 09:33:11 compute-0 systemd[1]: libpod-4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9.scope: Deactivated successfully.
Nov 25 09:33:11 compute-0 conmon[85425]: conmon 4ae42a11e17ee6f2f626 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9.scope/container/memory.events
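[annotation] The conmon 'Failed to open cgroups file ... memory.events' warning is most likely a benign race: the scope is deactivated in the previous line, so the cgroup is already gone by the time conmon tries to read memory.events. For a short-lived --rm container there is nothing to act on; if needed, one could confirm the transient scope is gone:

    systemctl list-units 'libpod-*.scope'   # transient scopes disappear once their containers exit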
Nov 25 09:33:11 compute-0 podman[85412]: 2025-11-25 09:33:11.8014414 +0000 UTC m=+0.750776890 container died 4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9 (image=quay.io/ceph/ceph:v19, name=relaxed_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-629b3df3cc7bd6c7f8b2b270c9b0ed850367e9d13c35bdac5eff280f4f6ac2cc-merged.mount: Deactivated successfully.
Nov 25 09:33:11 compute-0 podman[85412]: 2025-11-25 09:33:11.818855421 +0000 UTC m=+0.768190901 container remove 4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9 (image=quay.io/ceph/ceph:v19, name=relaxed_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:11 compute-0 systemd[1]: libpod-conmon-4ae42a11e17ee6f2f6261ea7985c8b4ee67bff990620d35effcbab185e7d4aa9.scope: Deactivated successfully.
Nov 25 09:33:11 compute-0 sudo[85409]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:11 compute-0 sudo[85482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veafvemckoutkgfndplbolyoymcxgkip ; /usr/bin/python3'
Nov 25 09:33:11 compute-0 sudo[85482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:12 compute-0 python3[85484]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:12 compute-0 podman[85485]: 2025-11-25 09:33:12.062809218 +0000 UTC m=+0.023982967 container create 3d62ad57a63684da0d153148bd715ec004edf900aea27ddbf01e49c40c4dfbed (image=quay.io/ceph/ceph:v19, name=infallible_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:33:12 compute-0 systemd[1]: Started libpod-conmon-3d62ad57a63684da0d153148bd715ec004edf900aea27ddbf01e49c40c4dfbed.scope.
Nov 25 09:33:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcff404379e4eefc63f0ba1618986afe2e3ff18430b51ae0647a626f79f32b4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcff404379e4eefc63f0ba1618986afe2e3ff18430b51ae0647a626f79f32b4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:12 compute-0 podman[85485]: 2025-11-25 09:33:12.108696267 +0000 UTC m=+0.069870016 container init 3d62ad57a63684da0d153148bd715ec004edf900aea27ddbf01e49c40c4dfbed (image=quay.io/ceph/ceph:v19, name=infallible_murdock, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 09:33:12 compute-0 podman[85485]: 2025-11-25 09:33:12.112200423 +0000 UTC m=+0.073374163 container start 3d62ad57a63684da0d153148bd715ec004edf900aea27ddbf01e49c40c4dfbed (image=quay.io/ceph/ceph:v19, name=infallible_murdock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:12 compute-0 podman[85485]: 2025-11-25 09:33:12.114359017 +0000 UTC m=+0.075532766 container attach 3d62ad57a63684da0d153148bd715ec004edf900aea27ddbf01e49c40c4dfbed (image=quay.io/ceph/ceph:v19, name=infallible_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:33:12 compute-0 podman[85485]: 2025-11-25 09:33:12.053102747 +0000 UTC m=+0.014276495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Nov 25 09:33:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1942627046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 09:33:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 25 09:33:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2930438515' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 09:33:12 compute-0 ceph-mon[74207]: osdmap e19: 3 total, 2 up, 3 in
Nov 25 09:33:12 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1942627046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 09:33:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1942627046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 09:33:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Nov 25 09:33:12 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Nov 25 09:33:12 compute-0 infallible_murdock[85498]: enabled application 'rbd' on pool 'volumes'
Nov 25 09:33:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:12 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:12 compute-0 systemd[1]: libpod-3d62ad57a63684da0d153148bd715ec004edf900aea27ddbf01e49c40c4dfbed.scope: Deactivated successfully.
Nov 25 09:33:12 compute-0 podman[85485]: 2025-11-25 09:33:12.816474062 +0000 UTC m=+0.777647810 container died 3d62ad57a63684da0d153148bd715ec004edf900aea27ddbf01e49c40c4dfbed (image=quay.io/ceph/ceph:v19, name=infallible_murdock, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fcff404379e4eefc63f0ba1618986afe2e3ff18430b51ae0647a626f79f32b4-merged.mount: Deactivated successfully.
Nov 25 09:33:12 compute-0 podman[85485]: 2025-11-25 09:33:12.832460976 +0000 UTC m=+0.793634726 container remove 3d62ad57a63684da0d153148bd715ec004edf900aea27ddbf01e49c40c4dfbed (image=quay.io/ceph/ceph:v19, name=infallible_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:33:12 compute-0 systemd[1]: libpod-conmon-3d62ad57a63684da0d153148bd715ec004edf900aea27ddbf01e49c40c4dfbed.scope: Deactivated successfully.
Nov 25 09:33:12 compute-0 sudo[85482]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
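[annotation] The config-key set calls above (their payloads appear to be elided by the mon's audit censoring) are cephadm persisting its per-host device-inventory cache for compute-2. The stored blobs can be inspected directly, assuming admin access:

    ceph config-key ls | grep 'mgr/cephadm/host.compute-2'
    ceph config-key get mgr/cephadm/host.compute-2.devices.0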
Nov 25 09:33:12 compute-0 sudo[85556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtelkllqkacxwddmnpbzhiglyelirbdb ; /usr/bin/python3'
Nov 25 09:33:12 compute-0 sudo[85556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:13 compute-0 python3[85558]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:13 compute-0 podman[85559]: 2025-11-25 09:33:13.084544488 +0000 UTC m=+0.023162152 container create 9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c (image=quay.io/ceph/ceph:v19, name=frosty_lovelace, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:13 compute-0 systemd[1]: Started libpod-conmon-9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c.scope.
Nov 25 09:33:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c5dba7f17198be70bcf6c20c4970d08f16cb66521e06166f3c9f316b2bb790/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c5dba7f17198be70bcf6c20c4970d08f16cb66521e06166f3c9f316b2bb790/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:13 compute-0 podman[85559]: 2025-11-25 09:33:13.126773711 +0000 UTC m=+0.065391396 container init 9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c (image=quay.io/ceph/ceph:v19, name=frosty_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 09:33:13 compute-0 podman[85559]: 2025-11-25 09:33:13.1302315 +0000 UTC m=+0.068849165 container start 9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c (image=quay.io/ceph/ceph:v19, name=frosty_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:13 compute-0 podman[85559]: 2025-11-25 09:33:13.131237364 +0000 UTC m=+0.069855029 container attach 9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c (image=quay.io/ceph/ceph:v19, name=frosty_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:13 compute-0 podman[85559]: 2025-11-25 09:33:13.075311937 +0000 UTC m=+0.013929623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Nov 25 09:33:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3230090525' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 09:33:13 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1942627046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 09:33:13 compute-0 ceph-mon[74207]: osdmap e20: 3 total, 2 up, 3 in
Nov 25 09:33:13 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:13 compute-0 ceph-mon[74207]: pgmap v63: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:13 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:13 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3230090525' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 09:33:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 25 09:33:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3230090525' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 09:33:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Nov 25 09:33:13 compute-0 frosty_lovelace[85571]: enabled application 'rbd' on pool 'backups'
Nov 25 09:33:13 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Nov 25 09:33:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:13 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:13 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:13 compute-0 systemd[1]: libpod-9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c.scope: Deactivated successfully.
Nov 25 09:33:13 compute-0 conmon[85571]: conmon 9758968a7c7fc9b2d2b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c.scope/container/memory.events
Nov 25 09:33:13 compute-0 podman[85596]: 2025-11-25 09:33:13.922111842 +0000 UTC m=+0.014001156 container died 9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c (image=quay.io/ceph/ceph:v19, name=frosty_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3c5dba7f17198be70bcf6c20c4970d08f16cb66521e06166f3c9f316b2bb790-merged.mount: Deactivated successfully.
Nov 25 09:33:13 compute-0 podman[85596]: 2025-11-25 09:33:13.937864726 +0000 UTC m=+0.029754030 container remove 9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c (image=quay.io/ceph/ceph:v19, name=frosty_lovelace, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:33:13 compute-0 systemd[1]: libpod-conmon-9758968a7c7fc9b2d2b98feb03985296c1adf7aac70c7b246e836f175f79103c.scope: Deactivated successfully.
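The podman lines around each of these helper containers trace the standard --rm one-shot lifecycle: image pull, create, init, start, attach, then died and remove once the entrypoint exits, with systemd deactivating the matching libpod-*.scope transient units. The conmon "Failed to open cgroups file ... memory.events" warning appears to be a harmless race here: the scope's cgroup is already torn down by the time conmon polls it. The same event stream can be watched live with, for example:

    podman events --since 1m --filter event=died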
Nov 25 09:33:13 compute-0 sudo[85556]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:14 compute-0 sudo[85631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnammsyjuxefwsmtchcsoxhtrambwwdm ; /usr/bin/python3'
Nov 25 09:33:14 compute-0 sudo[85631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:14 compute-0 sudo[85632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:33:14 compute-0 sudo[85632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:14 compute-0 sudo[85632]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:14 compute-0 python3[85645]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
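Each of these Ansible tasks wraps a one-shot ceph CLI call in a throwaway container; stripped of the podman plumbing, the logged invocation reduces to the plain client command below (same fsid, config, and keyring as logged):

    ceph --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 \
         -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
         osd pool application enable images rbd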
Nov 25 09:33:14 compute-0 podman[85659]: 2025-11-25 09:33:14.193851535 +0000 UTC m=+0.028247125 container create 1aa17f3458df3b433bea9e26f351f62557dd6cd1a7c1e20396f90d7ecf9d98a1 (image=quay.io/ceph/ceph:v19, name=gracious_solomon, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:33:14 compute-0 sudo[85665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:14 compute-0 sudo[85665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:14 compute-0 sudo[85665]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:14 compute-0 systemd[1]: Started libpod-conmon-1aa17f3458df3b433bea9e26f351f62557dd6cd1a7c1e20396f90d7ecf9d98a1.scope.
Nov 25 09:33:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf5947ffaaf6f05c6c1839bf2940f930416429984099df2f7d1530e90366d64/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf5947ffaaf6f05c6c1839bf2940f930416429984099df2f7d1530e90366d64/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:14 compute-0 podman[85659]: 2025-11-25 09:33:14.23823303 +0000 UTC m=+0.072628620 container init 1aa17f3458df3b433bea9e26f351f62557dd6cd1a7c1e20396f90d7ecf9d98a1 (image=quay.io/ceph/ceph:v19, name=gracious_solomon, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 09:33:14 compute-0 podman[85659]: 2025-11-25 09:33:14.241825873 +0000 UTC m=+0.076221453 container start 1aa17f3458df3b433bea9e26f351f62557dd6cd1a7c1e20396f90d7ecf9d98a1 (image=quay.io/ceph/ceph:v19, name=gracious_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:14 compute-0 podman[85659]: 2025-11-25 09:33:14.243864019 +0000 UTC m=+0.078259600 container attach 1aa17f3458df3b433bea9e26f351f62557dd6cd1a7c1e20396f90d7ecf9d98a1 (image=quay.io/ceph/ceph:v19, name=gracious_solomon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:33:14 compute-0 sudo[85696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:33:14 compute-0 sudo[85696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:14 compute-0 podman[85659]: 2025-11-25 09:33:14.180657338 +0000 UTC m=+0.015052937 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Nov 25 09:33:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/146186615' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 09:33:14 compute-0 sudo[85696]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:14 compute-0 ceph-mon[74207]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
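POOL_APP_NOT_ENABLED is raised for every pool that carries no application tag, and it clears as the "osd pool application enable" commands in this run land, pool by pool. To see which pools are still untagged, something like the following would do (a sketch, assuming admin access):

    ceph health detail                      # lists the offending pools
    ceph osd pool application get backups   # shows the tags now set on a pool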
Nov 25 09:33:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3230090525' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 09:33:14 compute-0 ceph-mon[74207]: osdmap e21: 3 total, 2 up, 3 in
Nov 25 09:33:14 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:14 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:14 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/146186615' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 09:33:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 25 09:33:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/146186615' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 09:33:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Nov 25 09:33:15 compute-0 gracious_solomon[85697]: enabled application 'rbd' on pool 'images'
Nov 25 09:33:15 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Nov 25 09:33:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:15 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:15 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:15 compute-0 systemd[1]: libpod-1aa17f3458df3b433bea9e26f351f62557dd6cd1a7c1e20396f90d7ecf9d98a1.scope: Deactivated successfully.
Nov 25 09:33:15 compute-0 podman[85659]: 2025-11-25 09:33:15.048440473 +0000 UTC m=+0.882836052 container died 1aa17f3458df3b433bea9e26f351f62557dd6cd1a7c1e20396f90d7ecf9d98a1 (image=quay.io/ceph/ceph:v19, name=gracious_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:33:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cf5947ffaaf6f05c6c1839bf2940f930416429984099df2f7d1530e90366d64-merged.mount: Deactivated successfully.
Nov 25 09:33:15 compute-0 podman[85659]: 2025-11-25 09:33:15.065596047 +0000 UTC m=+0.899991626 container remove 1aa17f3458df3b433bea9e26f351f62557dd6cd1a7c1e20396f90d7ecf9d98a1 (image=quay.io/ceph/ceph:v19, name=gracious_solomon, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:15 compute-0 sudo[85631]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:15 compute-0 systemd[1]: libpod-conmon-1aa17f3458df3b433bea9e26f351f62557dd6cd1a7c1e20396f90d7ecf9d98a1.scope: Deactivated successfully.
Nov 25 09:33:15 compute-0 sudo[85808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyqoikunswrtkptdxtwbkqjhjydoghib ; /usr/bin/python3'
Nov 25 09:33:15 compute-0 sudo[85808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:15 compute-0 python3[85810]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:15 compute-0 podman[85811]: 2025-11-25 09:33:15.30980832 +0000 UTC m=+0.026061651 container create 266d6858a1187a8eefa6a47d09059326f277febfedd708834a6c2582aa29c9ae (image=quay.io/ceph/ceph:v19, name=elegant_leavitt, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:15 compute-0 systemd[1]: Started libpod-conmon-266d6858a1187a8eefa6a47d09059326f277febfedd708834a6c2582aa29c9ae.scope.
Nov 25 09:33:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be54beb4aed88f710a5475675c388a15549b961086d577d24584218cc83d378e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be54beb4aed88f710a5475675c388a15549b961086d577d24584218cc83d378e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:15 compute-0 podman[85811]: 2025-11-25 09:33:15.359203273 +0000 UTC m=+0.075456594 container init 266d6858a1187a8eefa6a47d09059326f277febfedd708834a6c2582aa29c9ae (image=quay.io/ceph/ceph:v19, name=elegant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:33:15 compute-0 podman[85811]: 2025-11-25 09:33:15.364022484 +0000 UTC m=+0.080275805 container start 266d6858a1187a8eefa6a47d09059326f277febfedd708834a6c2582aa29c9ae (image=quay.io/ceph/ceph:v19, name=elegant_leavitt, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:33:15 compute-0 podman[85811]: 2025-11-25 09:33:15.367922286 +0000 UTC m=+0.084175597 container attach 266d6858a1187a8eefa6a47d09059326f277febfedd708834a6c2582aa29c9ae (image=quay.io/ceph/ceph:v19, name=elegant_leavitt, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:15 compute-0 podman[85811]: 2025-11-25 09:33:15.299861766 +0000 UTC m=+0.016115088 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Nov 25 09:33:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 09:33:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Nov 25 09:33:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370321277' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 09:33:15 compute-0 ceph-mon[74207]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/146186615' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 09:33:15 compute-0 ceph-mon[74207]: osdmap e22: 3 total, 2 up, 3 in
Nov 25 09:33:15 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:15 compute-0 ceph-mon[74207]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 09:33:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370321277' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.7M
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.7M
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134971801: error parsing value: Value '134971801' is below minimum 939524096
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134971801: error parsing value: Value '134971801' is below minimum 939524096
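The rejected value is easy to sanity-check: cephadm's autotuner divided this small VM's memory budget down to 134971801 bytes, i.e. 134971801 / 2^20 ≈ 128.7 MiB (matching the "Adjusting ... to 128.7M" line), while the option's floor is 939524096 bytes = 896 × 2^20 = 896 MiB exactly. Having already removed the old per-OSD override ("config rm" above), the failed "config set" simply leaves the default in force. The arithmetic, as a one-liner:

    python3 -c 'print(134971801 / 2**20, 939524096 / 2**20)'   # -> 128.72... 896.0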
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370321277' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 09:33:16 compute-0 elegant_leavitt[85823]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e23 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
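CRUSH weights are expressed in TiB, so initial_weight 0.0195 corresponds to roughly 0.0195 × 1024 ≈ 20 GiB, the size of the backing LV; with two of the three OSDs up, that is consistent with the pgmap's "40 GiB / 40 GiB avail". The per-OSD weights can be confirmed with:

    ceph osd tree    # WEIGHT column, one ~0.0195 entry per 20 GiB OSD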
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:16 compute-0 podman[85811]: 2025-11-25 09:33:16.055390378 +0000 UTC m=+0.771643699 container died 266d6858a1187a8eefa6a47d09059326f277febfedd708834a6c2582aa29c9ae (image=quay.io/ceph/ceph:v19, name=elegant_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:33:16 compute-0 systemd[1]: libpod-266d6858a1187a8eefa6a47d09059326f277febfedd708834a6c2582aa29c9ae.scope: Deactivated successfully.
Nov 25 09:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-be54beb4aed88f710a5475675c388a15549b961086d577d24584218cc83d378e-merged.mount: Deactivated successfully.
Nov 25 09:33:16 compute-0 podman[85811]: 2025-11-25 09:33:16.074254669 +0000 UTC m=+0.790507991 container remove 266d6858a1187a8eefa6a47d09059326f277febfedd708834a6c2582aa29c9ae (image=quay.io/ceph/ceph:v19, name=elegant_leavitt, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:16 compute-0 sudo[85847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:16 compute-0 sudo[85847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[85847]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 systemd[1]: libpod-conmon-266d6858a1187a8eefa6a47d09059326f277febfedd708834a6c2582aa29c9ae.scope: Deactivated successfully.
Nov 25 09:33:16 compute-0 sudo[85808]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[85882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:16 compute-0 sudo[85882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[85882]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[85907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:16 compute-0 sudo[85907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[85907]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[85954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnqogqymfychuvohaichzexhsxfbmpyy ; /usr/bin/python3'
Nov 25 09:33:16 compute-0 sudo[85954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:16 compute-0 sudo[85956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:16 compute-0 sudo[85956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[85956]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[85983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:16 compute-0 sudo[85983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[85983]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 python3[85960]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:16 compute-0 sudo[86031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:16 compute-0 sudo[86031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86031]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 podman[86039]: 2025-11-25 09:33:16.333399422 +0000 UTC m=+0.029702844 container create d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998 (image=quay.io/ceph/ceph:v19, name=stoic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:33:16 compute-0 systemd[1]: Started libpod-conmon-d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998.scope.
Nov 25 09:33:16 compute-0 sudo[86065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:16 compute-0 sudo[86065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86065]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127aa243776d8a79e781abd88feec5f15fc4e53b9d359f6a70eb4a9ad2722afa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127aa243776d8a79e781abd88feec5f15fc4e53b9d359f6a70eb4a9ad2722afa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:16 compute-0 podman[86039]: 2025-11-25 09:33:16.388301902 +0000 UTC m=+0.084605335 container init d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998 (image=quay.io/ceph/ceph:v19, name=stoic_grothendieck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:16 compute-0 podman[86039]: 2025-11-25 09:33:16.39305999 +0000 UTC m=+0.089363401 container start d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998 (image=quay.io/ceph/ceph:v19, name=stoic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:33:16 compute-0 podman[86039]: 2025-11-25 09:33:16.394318067 +0000 UTC m=+0.090621479 container attach d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998 (image=quay.io/ceph/ceph:v19, name=stoic_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:33:16 compute-0 sudo[86097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:33:16 compute-0 sudo[86097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86097]: pam_unix(sudo:session): session closed for user root
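The mkdir/touch/chown/chmod/mv sequence above is how cephadm distributes the refreshed minimal ceph.conf: it stages the file under /tmp/cephadm-<fsid>/, sets ownership and mode while the copy is still private, then renames it over the destination in one step so readers never see a partially written config (rename(2) is atomic when source and target live on the same filesystem). The same pattern in miniature, with hypothetical paths:

    install -m 644 -o root -g root new.conf /etc/ceph/.ceph.conf.staged
    mv /etc/ceph/.ceph.conf.staged /etc/ceph/ceph.conf   # atomic same-filesystem rename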
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:16 compute-0 podman[86039]: 2025-11-25 09:33:16.322335946 +0000 UTC m=+0.018639377 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:16 compute-0 sudo[86123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:16 compute-0 sudo[86123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86123]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[86149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:16 compute-0 sudo[86149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86149]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[86192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:16 compute-0 sudo[86192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86192]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[86217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:16 compute-0 sudo[86217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86217]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[86242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:16 compute-0 sudo[86242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86242]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1194115357' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 09:33:16 compute-0 sudo[86291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:16 compute-0 sudo[86291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86291]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[86316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:16 compute-0 sudo[86316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86316]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:16 compute-0 sudo[86341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:16 compute-0 sudo[86341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 sudo[86341]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:16 compute-0 sudo[86366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:16 compute-0 sudo[86366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:16 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:16 compute-0 sudo[86366]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:16 compute-0 sudo[86391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:33:16 compute-0 sudo[86391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
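This is the step that actually creates the OSD: cephadm hands the pre-built LV ceph_vg0/ceph_lv0 to ceph-volume in batch mode, with --no-auto to skip the automatic fast/slow device sorting, --no-systemd because cephadm manages the daemon units itself, and the bootstrap keyring fed in as JSON on stdin (--config-json -). Once it finishes, the resulting OSD layout can be inspected with, for example:

    cephadm ceph-volume -- lvm list    # runs ceph-volume lvm list inside the ceph container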
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mon[74207]: Adjusting osd_memory_target on compute-2 to 128.7M
Nov 25 09:33:17 compute-0 ceph-mon[74207]: Unable to set osd_memory_target on compute-2 to 134971801: error parsing value: Value '134971801' is below minimum 939524096
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mon[74207]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:17 compute-0 ceph-mon[74207]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:17 compute-0 ceph-mon[74207]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370321277' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 09:33:17 compute-0 ceph-mon[74207]: osdmap e23: 3 total, 2 up, 3 in
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mon[74207]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:17 compute-0 ceph-mon[74207]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:17 compute-0 ceph-mon[74207]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1194115357' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mon[74207]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 25 09:33:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 25 09:33:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1194115357' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 09:33:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Nov 25 09:33:17 compute-0 stoic_grothendieck[86093]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 25 09:33:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Nov 25 09:33:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:17 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1797544192; not ready for session (expect reconnect)
Nov 25 09:33:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:17 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:17 compute-0 systemd[1]: libpod-d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998.scope: Deactivated successfully.
Nov 25 09:33:17 compute-0 conmon[86093]: conmon d0e1a0d7a5799d8fa021 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998.scope/container/memory.events
Nov 25 09:33:17 compute-0 podman[86039]: 2025-11-25 09:33:17.05730639 +0000 UTC m=+0.753609801 container died d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998 (image=quay.io/ceph/ceph:v19, name=stoic_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-127aa243776d8a79e781abd88feec5f15fc4e53b9d359f6a70eb4a9ad2722afa-merged.mount: Deactivated successfully.
Nov 25 09:33:17 compute-0 podman[86039]: 2025-11-25 09:33:17.080872472 +0000 UTC m=+0.777175883 container remove d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998 (image=quay.io/ceph/ceph:v19, name=stoic_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:33:17 compute-0 sudo[85954]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:17 compute-0 systemd[1]: libpod-conmon-d0e1a0d7a5799d8fa02102256b847e10f0f9cefe4ec0c8c68ffa1f645b596998.scope: Deactivated successfully.
Nov 25 09:33:17 compute-0 podman[86457]: 2025-11-25 09:33:17.179860229 +0000 UTC m=+0.024516161 container create e87122041235c30d880170ef591c130fad468ccf27a11396bd438b0792ad5fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_leavitt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:17 compute-0 systemd[1]: Started libpod-conmon-e87122041235c30d880170ef591c130fad468ccf27a11396bd438b0792ad5fc2.scope.
Nov 25 09:33:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:17 compute-0 podman[86457]: 2025-11-25 09:33:17.219997575 +0000 UTC m=+0.064653527 container init e87122041235c30d880170ef591c130fad468ccf27a11396bd438b0792ad5fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_leavitt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:17 compute-0 podman[86457]: 2025-11-25 09:33:17.223758566 +0000 UTC m=+0.068414507 container start e87122041235c30d880170ef591c130fad468ccf27a11396bd438b0792ad5fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:33:17 compute-0 podman[86457]: 2025-11-25 09:33:17.224954907 +0000 UTC m=+0.069610868 container attach e87122041235c30d880170ef591c130fad468ccf27a11396bd438b0792ad5fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:17 compute-0 sharp_leavitt[86470]: 167 167
Nov 25 09:33:17 compute-0 systemd[1]: libpod-e87122041235c30d880170ef591c130fad468ccf27a11396bd438b0792ad5fc2.scope: Deactivated successfully.
Nov 25 09:33:17 compute-0 podman[86457]: 2025-11-25 09:33:17.226594102 +0000 UTC m=+0.071250044 container died e87122041235c30d880170ef591c130fad468ccf27a11396bd438b0792ad5fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fc84d13fec071493f68554bb7bd8bedad6270c0aa2227601ef24da26f266b34-merged.mount: Deactivated successfully.
Nov 25 09:33:17 compute-0 podman[86457]: 2025-11-25 09:33:17.246436965 +0000 UTC m=+0.091092907 container remove e87122041235c30d880170ef591c130fad468ccf27a11396bd438b0792ad5fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:33:17 compute-0 podman[86457]: 2025-11-25 09:33:17.169220591 +0000 UTC m=+0.013876553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:17 compute-0 systemd[1]: libpod-conmon-e87122041235c30d880170ef591c130fad468ccf27a11396bd438b0792ad5fc2.scope: Deactivated successfully.
Nov 25 09:33:17 compute-0 podman[86492]: 2025-11-25 09:33:17.348310245 +0000 UTC m=+0.024650956 container create dcb38b9222ab0990444a39dce505baa3e6680455ef858861e658ffd11169f104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:33:17 compute-0 systemd[1]: Started libpod-conmon-dcb38b9222ab0990444a39dce505baa3e6680455ef858861e658ffd11169f104.scope.
Nov 25 09:33:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240a3a027ee2c660446061ed371cc0efdaaefc62645d8653ddbd654052487d24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240a3a027ee2c660446061ed371cc0efdaaefc62645d8653ddbd654052487d24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240a3a027ee2c660446061ed371cc0efdaaefc62645d8653ddbd654052487d24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240a3a027ee2c660446061ed371cc0efdaaefc62645d8653ddbd654052487d24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240a3a027ee2c660446061ed371cc0efdaaefc62645d8653ddbd654052487d24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:17 compute-0 podman[86492]: 2025-11-25 09:33:17.39920454 +0000 UTC m=+0.075545250 container init dcb38b9222ab0990444a39dce505baa3e6680455ef858861e658ffd11169f104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:17 compute-0 podman[86492]: 2025-11-25 09:33:17.403519052 +0000 UTC m=+0.079859762 container start dcb38b9222ab0990444a39dce505baa3e6680455ef858861e658ffd11169f104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:33:17 compute-0 podman[86492]: 2025-11-25 09:33:17.40455375 +0000 UTC m=+0.080894460 container attach dcb38b9222ab0990444a39dce505baa3e6680455ef858861e658ffd11169f104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:17 compute-0 podman[86492]: 2025-11-25 09:33:17.337928923 +0000 UTC m=+0.014269653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:17 compute-0 wizardly_hugle[86505]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:33:17 compute-0 wizardly_hugle[86505]: --> All data devices are unavailable
Nov 25 09:33:17 compute-0 systemd[1]: libpod-dcb38b9222ab0990444a39dce505baa3e6680455ef858861e658ffd11169f104.scope: Deactivated successfully.
Nov 25 09:33:17 compute-0 podman[86492]: 2025-11-25 09:33:17.651774587 +0000 UTC m=+0.328115297 container died dcb38b9222ab0990444a39dce505baa3e6680455ef858861e658ffd11169f104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 09:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-240a3a027ee2c660446061ed371cc0efdaaefc62645d8653ddbd654052487d24-merged.mount: Deactivated successfully.
Nov 25 09:33:17 compute-0 podman[86492]: 2025-11-25 09:33:17.670338251 +0000 UTC m=+0.346678961 container remove dcb38b9222ab0990444a39dce505baa3e6680455ef858861e658ffd11169f104 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 09:33:17 compute-0 systemd[1]: libpod-conmon-dcb38b9222ab0990444a39dce505baa3e6680455ef858861e658ffd11169f104.scope: Deactivated successfully.
Nov 25 09:33:17 compute-0 sudo[86391]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:17 compute-0 sudo[86553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:17 compute-0 sudo[86553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:17 compute-0 sudo[86553]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:17 compute-0 sudo[86601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:33:17 compute-0 sudo[86601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:17 compute-0 python3[86655]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:33:18 compute-0 ceph-mon[74207]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 25 09:33:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1194115357' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 09:33:18 compute-0 ceph-mon[74207]: osdmap e24: 3 total, 2 up, 3 in
Nov 25 09:33:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:18 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:18 compute-0 podman[86734]: 2025-11-25 09:33:18.050935861 +0000 UTC m=+0.026348048 container create ab128115acfa5030bd74dcf234a63d184839dcad2cfb807603533889a22ee11c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_blackburn, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:33:18 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1797544192; not ready for session (expect reconnect)
Nov 25 09:33:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:18 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:18 compute-0 systemd[1]: Started libpod-conmon-ab128115acfa5030bd74dcf234a63d184839dcad2cfb807603533889a22ee11c.scope.
Nov 25 09:33:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:18 compute-0 podman[86734]: 2025-11-25 09:33:18.094235209 +0000 UTC m=+0.069647387 container init ab128115acfa5030bd74dcf234a63d184839dcad2cfb807603533889a22ee11c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:18 compute-0 podman[86734]: 2025-11-25 09:33:18.099904191 +0000 UTC m=+0.075316369 container start ab128115acfa5030bd74dcf234a63d184839dcad2cfb807603533889a22ee11c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Nov 25 09:33:18 compute-0 podman[86734]: 2025-11-25 09:33:18.100908531 +0000 UTC m=+0.076320709 container attach ab128115acfa5030bd74dcf234a63d184839dcad2cfb807603533889a22ee11c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_blackburn, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:18 compute-0 boring_blackburn[86771]: 167 167
Nov 25 09:33:18 compute-0 systemd[1]: libpod-ab128115acfa5030bd74dcf234a63d184839dcad2cfb807603533889a22ee11c.scope: Deactivated successfully.
Nov 25 09:33:18 compute-0 podman[86734]: 2025-11-25 09:33:18.10268239 +0000 UTC m=+0.078094568 container died ab128115acfa5030bd74dcf234a63d184839dcad2cfb807603533889a22ee11c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_blackburn, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-724494233c4b3175515e983cb8b5105b44ac001bce5240fdf95d3d028b9303e8-merged.mount: Deactivated successfully.
Nov 25 09:33:18 compute-0 podman[86734]: 2025-11-25 09:33:18.119034443 +0000 UTC m=+0.094446620 container remove ab128115acfa5030bd74dcf234a63d184839dcad2cfb807603533889a22ee11c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_blackburn, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:18 compute-0 podman[86734]: 2025-11-25 09:33:18.040930358 +0000 UTC m=+0.016342554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:18 compute-0 systemd[1]: libpod-conmon-ab128115acfa5030bd74dcf234a63d184839dcad2cfb807603533889a22ee11c.scope: Deactivated successfully.
Nov 25 09:33:18 compute-0 python3[86767]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063197.6989856-37566-28593177610236/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:18 compute-0 podman[86793]: 2025-11-25 09:33:18.229592491 +0000 UTC m=+0.025726419 container create 932539e55232b15cdd77aa8598d41d46c8ecd9bcf5999beefb4185d0425d3cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:33:18 compute-0 systemd[1]: Started libpod-conmon-932539e55232b15cdd77aa8598d41d46c8ecd9bcf5999beefb4185d0425d3cec.scope.
Nov 25 09:33:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1789ef5ce5ba21da3d4e02fb962df3d84a091392d03dabe11af449336b795303/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1789ef5ce5ba21da3d4e02fb962df3d84a091392d03dabe11af449336b795303/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1789ef5ce5ba21da3d4e02fb962df3d84a091392d03dabe11af449336b795303/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1789ef5ce5ba21da3d4e02fb962df3d84a091392d03dabe11af449336b795303/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:18 compute-0 podman[86793]: 2025-11-25 09:33:18.284825404 +0000 UTC m=+0.080959332 container init 932539e55232b15cdd77aa8598d41d46c8ecd9bcf5999beefb4185d0425d3cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:18 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 24 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24 pruub=15.482992172s) [] r=-1 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active pruub 65.508193970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:33:18 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 24 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24 pruub=15.482992172s) [] r=-1 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.508193970s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:33:18 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=24 pruub=13.473212242s) [] r=-1 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active pruub 63.498683929s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:33:18 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=24 pruub=13.473212242s) [] r=-1 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.498683929s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:33:18 compute-0 podman[86793]: 2025-11-25 09:33:18.291382597 +0000 UTC m=+0.087516525 container start 932539e55232b15cdd77aa8598d41d46c8ecd9bcf5999beefb4185d0425d3cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:33:18 compute-0 podman[86793]: 2025-11-25 09:33:18.292251974 +0000 UTC m=+0.088385901 container attach 932539e55232b15cdd77aa8598d41d46c8ecd9bcf5999beefb4185d0425d3cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:18 compute-0 podman[86793]: 2025-11-25 09:33:18.218520127 +0000 UTC m=+0.014654076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:18 compute-0 jolly_almeida[86830]: {
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:     "1": [
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:         {
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "devices": [
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "/dev/loop3"
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             ],
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "lv_name": "ceph_lv0",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "lv_size": "21470642176",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "name": "ceph_lv0",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "tags": {
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.cluster_name": "ceph",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.crush_device_class": "",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.encrypted": "0",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.osd_id": "1",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.type": "block",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.vdo": "0",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:                 "ceph.with_tpm": "0"
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             },
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "type": "block",
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:             "vg_name": "ceph_vg0"
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:         }
Nov 25 09:33:18 compute-0 jolly_almeida[86830]:     ]
Nov 25 09:33:18 compute-0 jolly_almeida[86830]: }
Nov 25 09:33:18 compute-0 systemd[1]: libpod-932539e55232b15cdd77aa8598d41d46c8ecd9bcf5999beefb4185d0425d3cec.scope: Deactivated successfully.
Nov 25 09:33:18 compute-0 podman[86793]: 2025-11-25 09:33:18.528409875 +0000 UTC m=+0.324543813 container died 932539e55232b15cdd77aa8598d41d46c8ecd9bcf5999beefb4185d0425d3cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1789ef5ce5ba21da3d4e02fb962df3d84a091392d03dabe11af449336b795303-merged.mount: Deactivated successfully.
Nov 25 09:33:18 compute-0 podman[86793]: 2025-11-25 09:33:18.554322554 +0000 UTC m=+0.350456482 container remove 932539e55232b15cdd77aa8598d41d46c8ecd9bcf5999beefb4185d0425d3cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 09:33:18 compute-0 systemd[1]: libpod-conmon-932539e55232b15cdd77aa8598d41d46c8ecd9bcf5999beefb4185d0425d3cec.scope: Deactivated successfully.
Nov 25 09:33:18 compute-0 sudo[86920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udzpjiaockgunfrrgsuokahqmqzldqec ; /usr/bin/python3'
Nov 25 09:33:18 compute-0 sudo[86920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:18 compute-0 sudo[86601]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:18 compute-0 sudo[86927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:18 compute-0 sudo[86927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:18 compute-0 sudo[86927]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:18 compute-0 sudo[86952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:33:18 compute-0 sudo[86952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:18 compute-0 python3[86926]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:33:18 compute-0 sudo[86920]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:18 compute-0 sudo[87061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnevsdfkuihpjxhovdsunhjxntvbmgpm ; /usr/bin/python3'
Nov 25 09:33:18 compute-0 sudo[87061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:18 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:18 compute-0 podman[87083]: 2025-11-25 09:33:18.9429656 +0000 UTC m=+0.027506893 container create 528a970a980c8ad45a15091be41f7ee9ae28b66987c9d0551a46f5bd78d2747f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bartik, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:33:18 compute-0 systemd[1]: Started libpod-conmon-528a970a980c8ad45a15091be41f7ee9ae28b66987c9d0551a46f5bd78d2747f.scope.
Nov 25 09:33:18 compute-0 python3[87070]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063198.4366415-37580-97934718896963/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=44d8da488fcc28923085cf17e23d9c4852856ae4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:18 compute-0 podman[87083]: 2025-11-25 09:33:18.994972117 +0000 UTC m=+0.079513430 container init 528a970a980c8ad45a15091be41f7ee9ae28b66987c9d0551a46f5bd78d2747f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:33:19 compute-0 sudo[87061]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:19 compute-0 podman[87083]: 2025-11-25 09:33:19.00496291 +0000 UTC m=+0.089504203 container start 528a970a980c8ad45a15091be41f7ee9ae28b66987c9d0551a46f5bd78d2747f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:19 compute-0 podman[87083]: 2025-11-25 09:33:19.00611928 +0000 UTC m=+0.090660572 container attach 528a970a980c8ad45a15091be41f7ee9ae28b66987c9d0551a46f5bd78d2747f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:33:19 compute-0 affectionate_bartik[87096]: 167 167
Nov 25 09:33:19 compute-0 systemd[1]: libpod-528a970a980c8ad45a15091be41f7ee9ae28b66987c9d0551a46f5bd78d2747f.scope: Deactivated successfully.
Nov 25 09:33:19 compute-0 podman[87083]: 2025-11-25 09:33:19.00849038 +0000 UTC m=+0.093031672 container died 528a970a980c8ad45a15091be41f7ee9ae28b66987c9d0551a46f5bd78d2747f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:33:19 compute-0 podman[87083]: 2025-11-25 09:33:19.024301776 +0000 UTC m=+0.108843068 container remove 528a970a980c8ad45a15091be41f7ee9ae28b66987c9d0551a46f5bd78d2747f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:19 compute-0 podman[87083]: 2025-11-25 09:33:18.931141231 +0000 UTC m=+0.015682543 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 25 09:33:19 compute-0 systemd[1]: libpod-conmon-528a970a980c8ad45a15091be41f7ee9ae28b66987c9d0551a46f5bd78d2747f.scope: Deactivated successfully.
Nov 25 09:33:19 compute-0 ceph-mon[74207]: purged_snaps scrub starts
Nov 25 09:33:19 compute-0 ceph-mon[74207]: purged_snaps scrub ok
Nov 25 09:33:19 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:19 compute-0 ceph-mon[74207]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 25 09:33:19 compute-0 ceph-mgr[74476]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1797544192; not ready for session (expect reconnect)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:19 compute-0 ceph-mgr[74476]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192] boot
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e084828e7d62b252b797cfb6b009aff79838e918f1e1123b0670af4332c0a13c-merged.mount: Deactivated successfully.
Nov 25 09:33:19 compute-0 podman[87143]: 2025-11-25 09:33:19.138371616 +0000 UTC m=+0.026944313 container create 760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_archimedes, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:19 compute-0 systemd[1]: Started libpod-conmon-760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e.scope.
Nov 25 09:33:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d43028d9f8d8c431169ba3f961a5f05ea12d8bcc6c5c94ab36f83fc7c8d2bb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d43028d9f8d8c431169ba3f961a5f05ea12d8bcc6c5c94ab36f83fc7c8d2bb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d43028d9f8d8c431169ba3f961a5f05ea12d8bcc6c5c94ab36f83fc7c8d2bb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d43028d9f8d8c431169ba3f961a5f05ea12d8bcc6c5c94ab36f83fc7c8d2bb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:19 compute-0 sudo[87183]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbxfswqplltsdipylixakhnomdvpuvp ; /usr/bin/python3'
Nov 25 09:33:19 compute-0 sudo[87183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:19 compute-0 podman[87143]: 2025-11-25 09:33:19.188714216 +0000 UTC m=+0.077286933 container init 760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 09:33:19 compute-0 podman[87143]: 2025-11-25 09:33:19.195764777 +0000 UTC m=+0.084337473 container start 760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_archimedes, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:33:19 compute-0 podman[87143]: 2025-11-25 09:33:19.204949231 +0000 UTC m=+0.093521947 container attach 760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Nov 25 09:33:19 compute-0 podman[87143]: 2025-11-25 09:33:19.126850528 +0000 UTC m=+0.015423244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:19 compute-0 python3[87186]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:19 compute-0 podman[87188]: 2025-11-25 09:33:19.345617772 +0000 UTC m=+0.034118697 container create a8a55d7992f4be02523ad43e9da1f5cc1843d4d059507f1b6f2850f9835e49b4 (image=quay.io/ceph/ceph:v19, name=stoic_lalande, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:19 compute-0 systemd[1]: Started libpod-conmon-a8a55d7992f4be02523ad43e9da1f5cc1843d4d059507f1b6f2850f9835e49b4.scope.
Nov 25 09:33:19 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 25 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25 pruub=14.387210846s) [2] r=-1 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.508193970s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:33:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:19 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 25 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25 pruub=14.387179375s) [2] r=-1 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.508193970s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:33:19 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 25 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=25 pruub=12.377623558s) [2] r=-1 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.498683929s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:33:19 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 25 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=25 pruub=12.377599716s) [2] r=-1 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.498683929s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef4f06642dc7daf11732041a1064aa5c1250c4c83a5fb6d8e8013ebc19b3871/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef4f06642dc7daf11732041a1064aa5c1250c4c83a5fb6d8e8013ebc19b3871/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef4f06642dc7daf11732041a1064aa5c1250c4c83a5fb6d8e8013ebc19b3871/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:19 compute-0 podman[87188]: 2025-11-25 09:33:19.398276929 +0000 UTC m=+0.086777854 container init a8a55d7992f4be02523ad43e9da1f5cc1843d4d059507f1b6f2850f9835e49b4 (image=quay.io/ceph/ceph:v19, name=stoic_lalande, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:33:19 compute-0 podman[87188]: 2025-11-25 09:33:19.402990695 +0000 UTC m=+0.091491620 container start a8a55d7992f4be02523ad43e9da1f5cc1843d4d059507f1b6f2850f9835e49b4 (image=quay.io/ceph/ceph:v19, name=stoic_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Nov 25 09:33:19 compute-0 podman[87188]: 2025-11-25 09:33:19.404587345 +0000 UTC m=+0.093088270 container attach a8a55d7992f4be02523ad43e9da1f5cc1843d4d059507f1b6f2850f9835e49b4 (image=quay.io/ceph/ceph:v19, name=stoic_lalande, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Nov 25 09:33:19 compute-0 podman[87188]: 2025-11-25 09:33:19.333755361 +0000 UTC m=+0.022256286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:19 compute-0 lvm[87293]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:33:19 compute-0 lvm[87293]: VG ceph_vg0 finished
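While that container runs, LVM autoactivation reports the OSD volume group ceph_vg0 (on the loop device /dev/loop3) as complete. Should the state need confirming by hand, the stock LVM tools are enough (a sketch; device and VG names are taken from the log):

    pvs /dev/loop3    # the loop-backed physical volume
    vgs ceph_vg0      # the volume group the OSD storage lives in
    lvs ceph_vg0      # typically one logical volume per bluestore OSD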
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2793311854' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 09:33:19 compute-0 fervent_archimedes[87175]: {}
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2793311854' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 09:33:19 compute-0 stoic_lalande[87207]: 
Nov 25 09:33:19 compute-0 stoic_lalande[87207]: [global]
Nov 25 09:33:19 compute-0 stoic_lalande[87207]:         fsid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:19 compute-0 stoic_lalande[87207]:         mon_host = 192.168.122.100
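The [global] block echoed by stoic_lalande is the residue of assimilation: config assimilate-conf moves every option it can into the monitors' central store and prints back a minimal file holding only what must remain local, here just fsid and mon_host. The same minimal file can be regenerated, and the assimilated options inspected, at any time (sketch):

    ceph config generate-minimal-conf    # smallest viable client ceph.conf
    ceph config dump                     # options now held centrally by the mons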
Nov 25 09:33:19 compute-0 systemd[1]: libpod-a8a55d7992f4be02523ad43e9da1f5cc1843d4d059507f1b6f2850f9835e49b4.scope: Deactivated successfully.
Nov 25 09:33:19 compute-0 podman[87188]: 2025-11-25 09:33:19.705345187 +0000 UTC m=+0.393846113 container died a8a55d7992f4be02523ad43e9da1f5cc1843d4d059507f1b6f2850f9835e49b4 (image=quay.io/ceph/ceph:v19, name=stoic_lalande, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:33:19 compute-0 systemd[1]: libpod-760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e.scope: Deactivated successfully.
Nov 25 09:33:19 compute-0 conmon[87175]: conmon 760f4fa50da8998ebed0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e.scope/container/memory.events
Nov 25 09:33:19 compute-0 podman[87143]: 2025-11-25 09:33:19.708271795 +0000 UTC m=+0.596844490 container died 760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_archimedes, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:33:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ef4f06642dc7daf11732041a1064aa5c1250c4c83a5fb6d8e8013ebc19b3871-merged.mount: Deactivated successfully.
Nov 25 09:33:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d43028d9f8d8c431169ba3f961a5f05ea12d8bcc6c5c94ab36f83fc7c8d2bb9-merged.mount: Deactivated successfully.
Nov 25 09:33:19 compute-0 podman[87188]: 2025-11-25 09:33:19.737105849 +0000 UTC m=+0.425606764 container remove a8a55d7992f4be02523ad43e9da1f5cc1843d4d059507f1b6f2850f9835e49b4 (image=quay.io/ceph/ceph:v19, name=stoic_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 09:33:19 compute-0 podman[87143]: 2025-11-25 09:33:19.74128128 +0000 UTC m=+0.629853976 container remove 760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_archimedes, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:19 compute-0 systemd[1]: libpod-conmon-a8a55d7992f4be02523ad43e9da1f5cc1843d4d059507f1b6f2850f9835e49b4.scope: Deactivated successfully.
Nov 25 09:33:19 compute-0 sudo[87183]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:19 compute-0 systemd[1]: libpod-conmon-760f4fa50da8998ebed0c8963b0f3bf24ab5d3581b7f76bc25410b7d0f967d1e.scope: Deactivated successfully.
Nov 25 09:33:19 compute-0 sudo[86952]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:19 compute-0 sudo[87318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:33:19 compute-0 sudo[87318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:19 compute-0 sudo[87318]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:19 compute-0 sudo[87366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diurmlpjdexbypzbqqqvjziqmlpdokcz ; /usr/bin/python3'
Nov 25 09:33:19 compute-0 sudo[87366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:19 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Nov 25 09:33:19 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:33:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:19 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 09:33:19 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
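The monmap-change reconfiguration is driven by the three mon commands visible in the audit channel above. Their direct CLI equivalents, if the sequence ever needs replaying manually, are (the JSON prefixes in the log map one-to-one onto these):

    ceph auth get mon.                   # fetch the mon. keyring
    ceph config get mon public_network   # network the monitors bind to
    ceph config generate-minimal-conf    # conf file to drop into the daemon dir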
Nov 25 09:33:19 compute-0 sudo[87369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:19 compute-0 sudo[87369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:19 compute-0 sudo[87369]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:19 compute-0 python3[87368]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
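This second podman round trip is a plain write into the cluster's config-key store, from which RGW later reads its SSL settings. Distilled to the ceph CLI, with a verification read added for illustration (the value is exactly as logged):

    ceph config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
    ceph config-key get ssl_option    # should echo the value just stored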
Nov 25 09:33:19 compute-0 sudo[87394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:19 compute-0 sudo[87394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:20 compute-0 podman[87412]: 2025-11-25 09:33:20.016296055 +0000 UTC m=+0.029482138 container create e6c0acb0358b72d40da142a8d5fa9c30fd56e471f5c27e911616d50e70bbc51a (image=quay.io/ceph/ceph:v19, name=brave_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:20 compute-0 systemd[1]: Started libpod-conmon-e6c0acb0358b72d40da142a8d5fa9c30fd56e471f5c27e911616d50e70bbc51a.scope.
Nov 25 09:33:20 compute-0 ceph-mon[74207]: OSD bench result of 23457.770996 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
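The monitor is rejecting osd.2's self-measured 23457 IOPS because it falls outside the 50 to 500 IOPS sanity window, so the mclock capacity stays pinned at the 315 IOPS default. Following the log's own recommendation, the manual override after an external benchmark would look like this (a sketch; 23000 is a placeholder for the fio-measured figure, and the _ssd suffix should match the device class):

    # Benchmark with fio first, then pin the measured capacity.
    ceph config set osd.2 osd_mclock_max_capacity_iops_ssd 23000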
Nov 25 09:33:20 compute-0 ceph-mon[74207]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: Cluster is now healthy
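With POOL_APP_NOT_ENABLED cleared, the cluster reports healthy. A quick cross-check from any host holding the admin keyring:

    ceph -s              # health, mon/mgr/osd counts, pgmap summary
    ceph health detail   # prints just HEALTH_OK when the cluster is clean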
Nov 25 09:33:20 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mon[74207]: osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192] boot
Nov 25 09:33:20 compute-0 ceph-mon[74207]: osdmap e25: 3 total, 3 up, 3 in
Nov 25 09:33:20 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2793311854' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2793311854' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 09:33:20 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:20 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:20 compute-0 ceph-mon[74207]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 25 09:33:20 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mon[74207]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 09:33:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 25 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d9ac7cbbe654ae9f132453b82ca0dcc0c44eb0fbfca922cf1358c9c811f8d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d9ac7cbbe654ae9f132453b82ca0dcc0c44eb0fbfca922cf1358c9c811f8d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d9ac7cbbe654ae9f132453b82ca0dcc0c44eb0fbfca922cf1358c9c811f8d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 25 09:33:20 compute-0 podman[87412]: 2025-11-25 09:33:20.068769271 +0000 UTC m=+0.081955374 container init e6c0acb0358b72d40da142a8d5fa9c30fd56e471f5c27e911616d50e70bbc51a (image=quay.io/ceph/ceph:v19, name=brave_colden, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 25 09:33:20 compute-0 podman[87412]: 2025-11-25 09:33:20.080146067 +0000 UTC m=+0.093332151 container start e6c0acb0358b72d40da142a8d5fa9c30fd56e471f5c27e911616d50e70bbc51a (image=quay.io/ceph/ceph:v19, name=brave_colden, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:20 compute-0 podman[87412]: 2025-11-25 09:33:20.081402225 +0000 UTC m=+0.094588308 container attach e6c0acb0358b72d40da142a8d5fa9c30fd56e471f5c27e911616d50e70bbc51a (image=quay.io/ceph/ceph:v19, name=brave_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:33:20 compute-0 podman[87412]: 2025-11-25 09:33:20.004764637 +0000 UTC m=+0.017950740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:20 compute-0 podman[87469]: 2025-11-25 09:33:20.233720377 +0000 UTC m=+0.026281534 container create 63091fe1f4567532849ee1259ebb0403d94b54799026bf2ffa4263266451fc7c (image=quay.io/ceph/ceph:v19, name=goofy_villani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:33:20 compute-0 systemd[1]: Started libpod-conmon-63091fe1f4567532849ee1259ebb0403d94b54799026bf2ffa4263266451fc7c.scope.
Nov 25 09:33:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:20 compute-0 podman[87469]: 2025-11-25 09:33:20.28065378 +0000 UTC m=+0.073214948 container init 63091fe1f4567532849ee1259ebb0403d94b54799026bf2ffa4263266451fc7c (image=quay.io/ceph/ceph:v19, name=goofy_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 25 09:33:20 compute-0 podman[87469]: 2025-11-25 09:33:20.284664721 +0000 UTC m=+0.077225878 container start 63091fe1f4567532849ee1259ebb0403d94b54799026bf2ffa4263266451fc7c (image=quay.io/ceph/ceph:v19, name=goofy_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:20 compute-0 podman[87469]: 2025-11-25 09:33:20.285675075 +0000 UTC m=+0.078236232 container attach 63091fe1f4567532849ee1259ebb0403d94b54799026bf2ffa4263266451fc7c (image=quay.io/ceph/ceph:v19, name=goofy_villani, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:33:20 compute-0 goofy_villani[87482]: 167 167
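The bare "167 167" printed here (and again by gifted_cohen and romantic_hellman below) is the ceph user's UID/GID pair inside the image, which cephadm needs in order to chown daemon directories on the host; 167:167 is the ceph user in Red Hat based Ceph images. A hand-run equivalent of that probe, assuming cephadm stats /var/lib/ceph as its reference path, would be:

    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph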
Nov 25 09:33:20 compute-0 systemd[1]: libpod-63091fe1f4567532849ee1259ebb0403d94b54799026bf2ffa4263266451fc7c.scope: Deactivated successfully.
Nov 25 09:33:20 compute-0 podman[87469]: 2025-11-25 09:33:20.287133054 +0000 UTC m=+0.079694211 container died 63091fe1f4567532849ee1259ebb0403d94b54799026bf2ffa4263266451fc7c (image=quay.io/ceph/ceph:v19, name=goofy_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d28409680e9791539ff77fa1cb65574c61c63969f79316746a399c2020364a4-merged.mount: Deactivated successfully.
Nov 25 09:33:20 compute-0 podman[87469]: 2025-11-25 09:33:20.30594125 +0000 UTC m=+0.098502406 container remove 63091fe1f4567532849ee1259ebb0403d94b54799026bf2ffa4263266451fc7c (image=quay.io/ceph/ceph:v19, name=goofy_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:20 compute-0 podman[87469]: 2025-11-25 09:33:20.22293165 +0000 UTC m=+0.015492827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:20 compute-0 systemd[1]: libpod-conmon-63091fe1f4567532849ee1259ebb0403d94b54799026bf2ffa4263266451fc7c.scope: Deactivated successfully.
Nov 25 09:33:20 compute-0 sudo[87394]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:20 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.zcfgby (monmap changed)...
Nov 25 09:33:20 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.zcfgby (monmap changed)...
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.zcfgby", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zcfgby", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
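The mgr reconfiguration re-issues its own credential through auth get-or-create; the bracketed caps in the audit line correspond directly to this CLI form (sketch):

    ceph auth get-or-create mgr.compute-0.zcfgby \
        mon 'profile mgr' osd 'allow *' mds 'allow *'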
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.zcfgby on compute-0
Nov 25 09:33:20 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.zcfgby on compute-0
Nov 25 09:33:20 compute-0 sudo[87497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:20 compute-0 sudo[87497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:20 compute-0 sudo[87497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/736770130' entity='client.admin' 
Nov 25 09:33:20 compute-0 brave_colden[87432]: set ssl_option
Nov 25 09:33:20 compute-0 sudo[87522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:20 compute-0 sudo[87522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:20 compute-0 systemd[1]: libpod-e6c0acb0358b72d40da142a8d5fa9c30fd56e471f5c27e911616d50e70bbc51a.scope: Deactivated successfully.
Nov 25 09:33:20 compute-0 podman[87412]: 2025-11-25 09:33:20.436121559 +0000 UTC m=+0.449307642 container died e6c0acb0358b72d40da142a8d5fa9c30fd56e471f5c27e911616d50e70bbc51a (image=quay.io/ceph/ceph:v19, name=brave_colden, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-15d9ac7cbbe654ae9f132453b82ca0dcc0c44eb0fbfca922cf1358c9c811f8d0-merged.mount: Deactivated successfully.
Nov 25 09:33:20 compute-0 podman[87412]: 2025-11-25 09:33:20.456439632 +0000 UTC m=+0.469625714 container remove e6c0acb0358b72d40da142a8d5fa9c30fd56e471f5c27e911616d50e70bbc51a (image=quay.io/ceph/ceph:v19, name=brave_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 25 09:33:20 compute-0 systemd[1]: libpod-conmon-e6c0acb0358b72d40da142a8d5fa9c30fd56e471f5c27e911616d50e70bbc51a.scope: Deactivated successfully.
Nov 25 09:33:20 compute-0 sudo[87366]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:20 compute-0 sudo[87581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvfhdsetgonvdwcezkmumqupgjpvxitv ; /usr/bin/python3'
Nov 25 09:33:20 compute-0 sudo[87581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:20 compute-0 podman[87598]: 2025-11-25 09:33:20.670461699 +0000 UTC m=+0.028640440 container create 26cb218dcb89d1403d516e7d48026e822707a5922d697a1f3f7b5c3648a70d26 (image=quay.io/ceph/ceph:v19, name=gifted_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:20 compute-0 systemd[1]: Started libpod-conmon-26cb218dcb89d1403d516e7d48026e822707a5922d697a1f3f7b5c3648a70d26.scope.
Nov 25 09:33:20 compute-0 python3[87585]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
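This run feeds /tmp/ceph_rgw.yml (mounted as /home/ceph_spec.yaml) to ceph orch apply. The spec's contents are never logged, but from the "Saving service ... spec" lines a few entries below (rgw.rgw placed on compute-0;compute-1;compute-2, ingress.rgw.default with count:2) it plausibly resembles this reconstruction, written as a heredoc so it stays in shell form:

    cat > /tmp/ceph_rgw.yml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: ingress
    service_id: rgw.default
    placement:
      count: 2
    # a real ingress spec also carries fields such as backend_service and
    # frontend_port that are not visible anywhere in this log
    EOF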
Nov 25 09:33:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:20 compute-0 podman[87598]: 2025-11-25 09:33:20.714549559 +0000 UTC m=+0.072728300 container init 26cb218dcb89d1403d516e7d48026e822707a5922d697a1f3f7b5c3648a70d26 (image=quay.io/ceph/ceph:v19, name=gifted_cohen, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:33:20 compute-0 podman[87598]: 2025-11-25 09:33:20.723192441 +0000 UTC m=+0.081371172 container start 26cb218dcb89d1403d516e7d48026e822707a5922d697a1f3f7b5c3648a70d26 (image=quay.io/ceph/ceph:v19, name=gifted_cohen, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:20 compute-0 gifted_cohen[87611]: 167 167
Nov 25 09:33:20 compute-0 podman[87598]: 2025-11-25 09:33:20.724734028 +0000 UTC m=+0.082912779 container attach 26cb218dcb89d1403d516e7d48026e822707a5922d697a1f3f7b5c3648a70d26 (image=quay.io/ceph/ceph:v19, name=gifted_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:20 compute-0 systemd[1]: libpod-26cb218dcb89d1403d516e7d48026e822707a5922d697a1f3f7b5c3648a70d26.scope: Deactivated successfully.
Nov 25 09:33:20 compute-0 podman[87598]: 2025-11-25 09:33:20.726159614 +0000 UTC m=+0.084338346 container died 26cb218dcb89d1403d516e7d48026e822707a5922d697a1f3f7b5c3648a70d26 (image=quay.io/ceph/ceph:v19, name=gifted_cohen, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:33:20 compute-0 podman[87598]: 2025-11-25 09:33:20.742747835 +0000 UTC m=+0.100926566 container remove 26cb218dcb89d1403d516e7d48026e822707a5922d697a1f3f7b5c3648a70d26 (image=quay.io/ceph/ceph:v19, name=gifted_cohen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:20 compute-0 podman[87598]: 2025-11-25 09:33:20.65850991 +0000 UTC m=+0.016688661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:20 compute-0 podman[87614]: 2025-11-25 09:33:20.752064167 +0000 UTC m=+0.036898979 container create f38f3609693cc5db6d44cf8b829deaaa9f4b7988bc904d104e9bab6b03236f10 (image=quay.io/ceph/ceph:v19, name=happy_snyder, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:20 compute-0 systemd[1]: libpod-conmon-26cb218dcb89d1403d516e7d48026e822707a5922d697a1f3f7b5c3648a70d26.scope: Deactivated successfully.
Nov 25 09:33:20 compute-0 systemd[1]: Started libpod-conmon-f38f3609693cc5db6d44cf8b829deaaa9f4b7988bc904d104e9bab6b03236f10.scope.
Nov 25 09:33:20 compute-0 sudo[87522]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10744186aee6d2ad8544a7eea11e8dafb704a0e05828e35e134b90e84435d39a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10744186aee6d2ad8544a7eea11e8dafb704a0e05828e35e134b90e84435d39a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10744186aee6d2ad8544a7eea11e8dafb704a0e05828e35e134b90e84435d39a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:20 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Nov 25 09:33:20 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:20 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Nov 25 09:33:20 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Nov 25 09:33:20 compute-0 podman[87614]: 2025-11-25 09:33:20.803642476 +0000 UTC m=+0.088477289 container init f38f3609693cc5db6d44cf8b829deaaa9f4b7988bc904d104e9bab6b03236f10 (image=quay.io/ceph/ceph:v19, name=happy_snyder, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:20 compute-0 podman[87614]: 2025-11-25 09:33:20.807937201 +0000 UTC m=+0.092772014 container start f38f3609693cc5db6d44cf8b829deaaa9f4b7988bc904d104e9bab6b03236f10 (image=quay.io/ceph/ceph:v19, name=happy_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:20 compute-0 podman[87614]: 2025-11-25 09:33:20.808833781 +0000 UTC m=+0.093668594 container attach f38f3609693cc5db6d44cf8b829deaaa9f4b7988bc904d104e9bab6b03236f10 (image=quay.io/ceph/ceph:v19, name=happy_snyder, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:33:20 compute-0 podman[87614]: 2025-11-25 09:33:20.733278123 +0000 UTC m=+0.018112956 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:20 compute-0 sudo[87644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:20 compute-0 sudo[87644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:20 compute-0 sudo[87644]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:20 compute-0 sudo[87669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:20 compute-0 sudo[87669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: osdmap e26: 3 total, 3 up, 3 in
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 ceph-mon[74207]: Reconfiguring mgr.compute-0.zcfgby (monmap changed)...
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zcfgby", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mon[74207]: Reconfiguring daemon mgr.compute-0.zcfgby on compute-0
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/736770130' entity='client.admin' 
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 ceph-mon[74207]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mon[74207]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 25 09:33:21 compute-0 ceph-mon[74207]: pgmap v73: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd334f12205084317079434c58fe2b37bf8e17e5c5ec41fb03c0645b324a9c43-merged.mount: Deactivated successfully.
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 happy_snyder[87640]: Scheduled rgw.rgw update...
Nov 25 09:33:21 compute-0 happy_snyder[87640]: Scheduled ingress.rgw.default update...
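Both specs are now queued for the orchestrator's next serve pass. Convergence can be followed with the standard orch queries (sketch):

    ceph orch ls rgw                 # service-level view: placement vs running
    ceph orch ps --daemon_type rgw   # per-daemon view across hosts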
Nov 25 09:33:21 compute-0 podman[87727]: 2025-11-25 09:33:21.110800313 +0000 UTC m=+0.029490212 container create 66b10bd3fd04c6bdd802ea79eb5f219f885498f1df5f855592921c265a8537e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 09:33:21 compute-0 systemd[1]: libpod-f38f3609693cc5db6d44cf8b829deaaa9f4b7988bc904d104e9bab6b03236f10.scope: Deactivated successfully.
Nov 25 09:33:21 compute-0 systemd[1]: Started libpod-conmon-66b10bd3fd04c6bdd802ea79eb5f219f885498f1df5f855592921c265a8537e7.scope.
Nov 25 09:33:21 compute-0 podman[87740]: 2025-11-25 09:33:21.148917436 +0000 UTC m=+0.020163020 container died f38f3609693cc5db6d44cf8b829deaaa9f4b7988bc904d104e9bab6b03236f10 (image=quay.io/ceph/ceph:v19, name=happy_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:33:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-10744186aee6d2ad8544a7eea11e8dafb704a0e05828e35e134b90e84435d39a-merged.mount: Deactivated successfully.
Nov 25 09:33:21 compute-0 podman[87727]: 2025-11-25 09:33:21.163784853 +0000 UTC m=+0.082474752 container init 66b10bd3fd04c6bdd802ea79eb5f219f885498f1df5f855592921c265a8537e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:33:21 compute-0 podman[87740]: 2025-11-25 09:33:21.16532732 +0000 UTC m=+0.036572884 container remove f38f3609693cc5db6d44cf8b829deaaa9f4b7988bc904d104e9bab6b03236f10 (image=quay.io/ceph/ceph:v19, name=happy_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:33:21 compute-0 systemd[1]: libpod-conmon-f38f3609693cc5db6d44cf8b829deaaa9f4b7988bc904d104e9bab6b03236f10.scope: Deactivated successfully.
Nov 25 09:33:21 compute-0 podman[87727]: 2025-11-25 09:33:21.168329762 +0000 UTC m=+0.087019650 container start 66b10bd3fd04c6bdd802ea79eb5f219f885498f1df5f855592921c265a8537e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hellman, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:21 compute-0 podman[87727]: 2025-11-25 09:33:21.170231045 +0000 UTC m=+0.088920935 container attach 66b10bd3fd04c6bdd802ea79eb5f219f885498f1df5f855592921c265a8537e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:33:21 compute-0 romantic_hellman[87750]: 167 167
Nov 25 09:33:21 compute-0 systemd[1]: libpod-66b10bd3fd04c6bdd802ea79eb5f219f885498f1df5f855592921c265a8537e7.scope: Deactivated successfully.
Nov 25 09:33:21 compute-0 podman[87727]: 2025-11-25 09:33:21.171824228 +0000 UTC m=+0.090514117 container died 66b10bd3fd04c6bdd802ea79eb5f219f885498f1df5f855592921c265a8537e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b42395383a3da4f45c97430a1277f7ac19f7cfbca1e48ccd6e6c489793dd8689-merged.mount: Deactivated successfully.
Nov 25 09:33:21 compute-0 sudo[87581]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:21 compute-0 podman[87727]: 2025-11-25 09:33:21.189556295 +0000 UTC m=+0.108246184 container remove 66b10bd3fd04c6bdd802ea79eb5f219f885498f1df5f855592921c265a8537e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hellman, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:21 compute-0 podman[87727]: 2025-11-25 09:33:21.100245657 +0000 UTC m=+0.018935566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:21 compute-0 systemd[1]: libpod-conmon-66b10bd3fd04c6bdd802ea79eb5f219f885498f1df5f855592921c265a8537e7.scope: Deactivated successfully.
Nov 25 09:33:21 compute-0 sudo[87669]: pam_unix(sudo:session): session closed for user root
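The burst above (create, init, start, attach, died, remove inside roughly 10 ms, with "167 167" on stdout) is consistent with cephadm's uid/gid probe: before touching a daemon it launches the pinned ceph image with a trivial command to read the uid and gid of the ceph user inside that image. A minimal sketch of such a probe, assuming it stats /var/lib/ceph in the container (romantic_hellman is only podman's generated name):

    podman run --rm \
      --entrypoint stat \
      quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
      -c '%u %g' /var/lib/ceph
    # prints "167 167", the same pair romantic_hellman logged above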
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Nov 25 09:33:21 compute-0 sudo[87769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:21 compute-0 sudo[87769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:21 compute-0 sudo[87769]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:21 compute-0 sudo[87801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:21 compute-0 sudo[87801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
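The sudo line above shows how the mgr drives this host: it ships a content-addressed copy of the cephadm script (cephadm.<sha256 of the script>) into /var/lib/ceph/<fsid>/ and runs it as root with the image pinned by digest; _orch deploy appears to be the internal subcommand the orchestrator uses to (re)deploy a single daemon. The same copy can be invoked by hand when debugging, for example to list the daemons cephadm manages here (an illustrative invocation reusing the logged path, not something this run performed):

    sudo /usr/bin/python3 \
      /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 \
      ls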
Nov 25 09:33:21 compute-0 python3[87894]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:33:21 compute-0 podman[87909]: 2025-11-25 09:33:21.546353326 +0000 UTC m=+0.026848934 container create 0ce80d7f738e46cc5ad604190bf35e0f92f0146c11b80593b48e860bac8ba191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bhaskara, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:33:21 compute-0 systemd[1]: Started libpod-conmon-0ce80d7f738e46cc5ad604190bf35e0f92f0146c11b80593b48e860bac8ba191.scope.
Nov 25 09:33:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:21 compute-0 podman[87909]: 2025-11-25 09:33:21.588785845 +0000 UTC m=+0.069281463 container init 0ce80d7f738e46cc5ad604190bf35e0f92f0146c11b80593b48e860bac8ba191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bhaskara, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:33:21 compute-0 podman[87909]: 2025-11-25 09:33:21.593968474 +0000 UTC m=+0.074464082 container start 0ce80d7f738e46cc5ad604190bf35e0f92f0146c11b80593b48e860bac8ba191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:33:21 compute-0 podman[87909]: 2025-11-25 09:33:21.595860381 +0000 UTC m=+0.076356009 container attach 0ce80d7f738e46cc5ad604190bf35e0f92f0146c11b80593b48e860bac8ba191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:33:21 compute-0 objective_bhaskara[87946]: 167 167
Nov 25 09:33:21 compute-0 systemd[1]: libpod-0ce80d7f738e46cc5ad604190bf35e0f92f0146c11b80593b48e860bac8ba191.scope: Deactivated successfully.
Nov 25 09:33:21 compute-0 podman[87909]: 2025-11-25 09:33:21.598409646 +0000 UTC m=+0.078905254 container died 0ce80d7f738e46cc5ad604190bf35e0f92f0146c11b80593b48e860bac8ba191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bhaskara, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:33:21 compute-0 podman[87909]: 2025-11-25 09:33:21.616656833 +0000 UTC m=+0.097152441 container remove 0ce80d7f738e46cc5ad604190bf35e0f92f0146c11b80593b48e860bac8ba191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bhaskara, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:21 compute-0 podman[87909]: 2025-11-25 09:33:21.535245347 +0000 UTC m=+0.015740975 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:21 compute-0 systemd[1]: libpod-conmon-0ce80d7f738e46cc5ad604190bf35e0f92f0146c11b80593b48e860bac8ba191.scope: Deactivated successfully.
Nov 25 09:33:21 compute-0 sudo[87801]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Nov 25 09:33:21 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
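Rebuilding crash.compute-1 starts with its keyring: the mon_command a few entries up is the JSON wire form of an auth get-or-create scoped to the crash profile on both mon and mgr. Issued from the admin CLI, the same request reads:

    ceph auth get-or-create client.crash.compute-1 \
      mon 'profile crash' \
      mgr 'profile crash'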
Nov 25 09:33:21 compute-0 python3[88009]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063201.3012211-37599-262062957299640/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e3fb15cced8da8ec10faf7785c4efbbc2aefcce3f94b3809006ce062c0ee02-merged.mount: Deactivated successfully.
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mon[74207]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mon[74207]: Saving service ingress.rgw.default spec with placement count:2
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mon[74207]: Reconfiguring osd.1 (monmap changed)...
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mon[74207]: Reconfiguring daemon osd.1 on compute-0
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mon[74207]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mon[74207]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Nov 25 09:33:22 compute-0 sudo[88062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyueolyxzixtkfedprlxtkmndutabogt ; /usr/bin/python3'
Nov 25 09:33:22 compute-0 sudo[88062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:22 compute-0 python3[88064]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
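The _raw_params value above is a single flattened shell command; reflowed for readability (content unchanged), it mounts the rendered /tmp/ceph_dashboard.yml into a throwaway client container and feeds it to the orchestrator:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch apply --in-file /home/ceph_spec.yaml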
Nov 25 09:33:22 compute-0 podman[88065]: 2025-11-25 09:33:22.324129486 +0000 UTC m=+0.027150872 container create db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564 (image=quay.io/ceph/ceph:v19, name=intelligent_margulis, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:22 compute-0 systemd[1]: Started libpod-conmon-db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564.scope.
Nov 25 09:33:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88264d59875a86f6748e6a94d4209e6c4fc332c7e69375eaa9eb9625e3dc8097/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88264d59875a86f6748e6a94d4209e6c4fc332c7e69375eaa9eb9625e3dc8097/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88264d59875a86f6748e6a94d4209e6c4fc332c7e69375eaa9eb9625e3dc8097/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:22 compute-0 podman[88065]: 2025-11-25 09:33:22.370917405 +0000 UTC m=+0.073938811 container init db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564 (image=quay.io/ceph/ceph:v19, name=intelligent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:22 compute-0 podman[88065]: 2025-11-25 09:33:22.375948569 +0000 UTC m=+0.078969955 container start db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564 (image=quay.io/ceph/ceph:v19, name=intelligent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:33:22 compute-0 podman[88065]: 2025-11-25 09:33:22.377171745 +0000 UTC m=+0.080193132 container attach db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564 (image=quay.io/ceph/ceph:v19, name=intelligent_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:33:22 compute-0 podman[88065]: 2025-11-25 09:33:22.312967185 +0000 UTC m=+0.015988591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
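Reconfiguring a mon follows the same three-query pattern each time it appears in this log: fetch the shared mon. key, read the configured public_network, and render a minimal ceph.conf for the daemon's data directory. From the CLI the sequence is:

    ceph auth get mon.
    ceph config get mon public_network
    ceph config generate-minimal-conf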
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service node-exporter spec with placement *
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:22 compute-0 intelligent_margulis[88078]: Scheduled node-exporter update...
Nov 25 09:33:22 compute-0 intelligent_margulis[88078]: Scheduled grafana update...
Nov 25 09:33:22 compute-0 intelligent_margulis[88078]: Scheduled prometheus update...
Nov 25 09:33:22 compute-0 intelligent_margulis[88078]: Scheduled alertmanager update...
Nov 25 09:33:22 compute-0 systemd[1]: libpod-db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564.scope: Deactivated successfully.
Nov 25 09:33:22 compute-0 conmon[88078]: conmon db16b65e02df723337cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564.scope/container/memory.events
Nov 25 09:33:22 compute-0 podman[88065]: 2025-11-25 09:33:22.678267205 +0000 UTC m=+0.381288591 container died db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564 (image=quay.io/ceph/ceph:v19, name=intelligent_margulis, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-88264d59875a86f6748e6a94d4209e6c4fc332c7e69375eaa9eb9625e3dc8097-merged.mount: Deactivated successfully.
Nov 25 09:33:22 compute-0 podman[88065]: 2025-11-25 09:33:22.697844699 +0000 UTC m=+0.400866085 container remove db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564 (image=quay.io/ceph/ceph:v19, name=intelligent_margulis, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:22 compute-0 sudo[88062]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:22 compute-0 systemd[1]: libpod-conmon-db16b65e02df723337cc8332823375f4bcd166bc6feca9840c61bd0c1846c564.scope: Deactivated successfully.
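The four "Saving service ... spec" entries, echoed back by intelligent_margulis as "Scheduled ... update", fix the placements in the applied monitoring spec, so the rendered ceph_dashboard.yml plausibly contained blocks like the following (a reconstruction from the logged placements, not the literal file):

    ceph orch apply -i - <<'EOF'
    service_type: node-exporter
    placement:
      host_pattern: '*'
    ---
    service_type: grafana
    placement:
      hosts:
        - compute-0
      count: 1
    ---
    service_type: prometheus
    placement:
      hosts:
        - compute-0
      count: 1
    ---
    service_type: alertmanager
    placement:
      hosts:
        - compute-0
      count: 1
    EOF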
Nov 25 09:33:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
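The pgmap line is the cluster's health heartbeat at this moment: 7 placement groups, 2 still peering after the monmap change and 5 active+clean, with 449 KiB of object data against 60 GiB of raw capacity. The same one-line summary is available on demand:

    ceph pg stat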
Nov 25 09:33:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Nov 25 09:33:23 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Nov 25 09:33:23 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Nov 25 09:33:23 compute-0 sudo[88136]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gygkrnhwsifzpafvapuwgfezkaebhfny ; /usr/bin/python3'
Nov 25 09:33:23 compute-0 sudo[88136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: Reconfiguring osd.0 (monmap changed)...
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: Reconfiguring daemon osd.0 on compute-1
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: Saving service node-exporter spec with placement *
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: Saving service grafana spec with placement compute-0;count:1
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: Saving service prometheus spec with placement compute-0;count:1
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: Saving service alertmanager spec with placement compute-0;count:1
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: pgmap v74: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:23 compute-0 python3[88138]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:23 compute-0 podman[88139]: 2025-11-25 09:33:23.177793311 +0000 UTC m=+0.027669360 container create fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a (image=quay.io/ceph/ceph:v19, name=affectionate_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 09:33:23 compute-0 systemd[1]: Started libpod-conmon-fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a.scope.
Nov 25 09:33:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4293f39cf59e0e38d4b31e58b8a76d695bc1c123f82b5922befd558965305dca/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4293f39cf59e0e38d4b31e58b8a76d695bc1c123f82b5922befd558965305dca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4293f39cf59e0e38d4b31e58b8a76d695bc1c123f82b5922befd558965305dca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:23 compute-0 podman[88139]: 2025-11-25 09:33:23.238105885 +0000 UTC m=+0.087981955 container init fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a (image=quay.io/ceph/ceph:v19, name=affectionate_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 09:33:23 compute-0 podman[88139]: 2025-11-25 09:33:23.24201806 +0000 UTC m=+0.091894109 container start fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a (image=quay.io/ceph/ceph:v19, name=affectionate_haibt, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:23 compute-0 podman[88139]: 2025-11-25 09:33:23.244396043 +0000 UTC m=+0.094272092 container attach fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a (image=quay.io/ceph/ceph:v19, name=affectionate_haibt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:23 compute-0 podman[88139]: 2025-11-25 09:33:23.166696703 +0000 UTC m=+0.016572772 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.flybft (monmap changed)...
Nov 25 09:33:23 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.flybft (monmap changed)...
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.flybft", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.flybft", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:23 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.flybft on compute-2
Nov 25 09:33:23 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.flybft on compute-2
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3510657336' entity='client.admin' 
Nov 25 09:33:23 compute-0 systemd[1]: libpod-fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a.scope: Deactivated successfully.
Nov 25 09:33:23 compute-0 conmon[88151]: conmon fbfcaae13c90dc28fc96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a.scope/container/memory.events
Nov 25 09:33:23 compute-0 podman[88176]: 2025-11-25 09:33:23.558732412 +0000 UTC m=+0.016327510 container died fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a (image=quay.io/ceph/ceph:v19, name=affectionate_haibt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:33:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4293f39cf59e0e38d4b31e58b8a76d695bc1c123f82b5922befd558965305dca-merged.mount: Deactivated successfully.
Nov 25 09:33:23 compute-0 podman[88176]: 2025-11-25 09:33:23.578400138 +0000 UTC m=+0.035995215 container remove fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a (image=quay.io/ceph/ceph:v19, name=affectionate_haibt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:33:23 compute-0 systemd[1]: libpod-conmon-fbfcaae13c90dc28fc9655b6c2a49a875cb7d9d832933585e129bd5602c2442a.scope: Deactivated successfully.
Nov 25 09:33:23 compute-0 sudo[88136]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:23 compute-0 sudo[88210]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjynvdnhzkyebqqyebwlqsuvsrcnrdxl ; /usr/bin/python3'
Nov 25 09:33:23 compute-0 sudo[88210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:23 compute-0 python3[88212]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:23 compute-0 podman[88213]: 2025-11-25 09:33:23.870015025 +0000 UTC m=+0.028142892 container create e7ebd6298cc792ec88f7df627818273cdcf7c7df314d4457868c8f3d22f1e690 (image=quay.io/ceph/ceph:v19, name=trusting_hawking, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:23 compute-0 systemd[1]: Started libpod-conmon-e7ebd6298cc792ec88f7df627818273cdcf7c7df314d4457868c8f3d22f1e690.scope.
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c414d16d8b1d31b38ba9c41ccc7c3714511da046672cb294cdf3f3a6b9e3840/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c414d16d8b1d31b38ba9c41ccc7c3714511da046672cb294cdf3f3a6b9e3840/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c414d16d8b1d31b38ba9c41ccc7c3714511da046672cb294cdf3f3a6b9e3840/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:23 compute-0 podman[88213]: 2025-11-25 09:33:23.925697401 +0000 UTC m=+0.083825288 container init e7ebd6298cc792ec88f7df627818273cdcf7c7df314d4457868c8f3d22f1e690 (image=quay.io/ceph/ceph:v19, name=trusting_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:23 compute-0 podman[88213]: 2025-11-25 09:33:23.929668306 +0000 UTC m=+0.087796172 container start e7ebd6298cc792ec88f7df627818273cdcf7c7df314d4457868c8f3d22f1e690 (image=quay.io/ceph/ceph:v19, name=trusting_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:23 compute-0 podman[88213]: 2025-11-25 09:33:23.930933261 +0000 UTC m=+0.089061147 container attach e7ebd6298cc792ec88f7df627818273cdcf7c7df314d4457868c8f3d22f1e690 (image=quay.io/ceph/ceph:v19, name=trusting_hawking, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:23 compute-0 podman[88213]: 2025-11-25 09:33:23.858971827 +0000 UTC m=+0.017099715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2358325226' entity='client.admin' 
Nov 25 09:33:24 compute-0 systemd[1]: libpod-e7ebd6298cc792ec88f7df627818273cdcf7c7df314d4457868c8f3d22f1e690.scope: Deactivated successfully.
Nov 25 09:33:24 compute-0 podman[88213]: 2025-11-25 09:33:24.223835454 +0000 UTC m=+0.381963322 container died e7ebd6298cc792ec88f7df627818273cdcf7c7df314d4457868c8f3d22f1e690 (image=quay.io/ceph/ceph:v19, name=trusting_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c414d16d8b1d31b38ba9c41ccc7c3714511da046672cb294cdf3f3a6b9e3840-merged.mount: Deactivated successfully.
Nov 25 09:33:24 compute-0 podman[88213]: 2025-11-25 09:33:24.243862949 +0000 UTC m=+0.401990815 container remove e7ebd6298cc792ec88f7df627818273cdcf7c7df314d4457868c8f3d22f1e690 (image=quay.io/ceph/ceph:v19, name=trusting_hawking, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:24 compute-0 sudo[88210]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:24 compute-0 systemd[1]: libpod-conmon-e7ebd6298cc792ec88f7df627818273cdcf7c7df314d4457868c8f3d22f1e690.scope: Deactivated successfully.
Nov 25 09:33:24 compute-0 sudo[88283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfsywceiwhoyotxyimnbxuvkdzdlssyd ; /usr/bin/python3'
Nov 25 09:33:24 compute-0 sudo[88283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 25 09:33:24 compute-0 ceph-mon[74207]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 25 09:33:24 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-0 ceph-mon[74207]: Reconfiguring mgr.compute-2.flybft (monmap changed)...
Nov 25 09:33:24 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.flybft", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 09:33:24 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 09:33:24 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:24 compute-0 ceph-mon[74207]: Reconfiguring daemon mgr.compute-2.flybft on compute-2
Nov 25 09:33:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3510657336' entity='client.admin' 
Nov 25 09:33:24 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2358325226' entity='client.admin' 
Nov 25 09:33:24 compute-0 python3[88285]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:24 compute-0 podman[88286]: 2025-11-25 09:33:24.56082113 +0000 UTC m=+0.024590015 container create be9783de126ffd18342fa7667a17660f9f93703d277acbe83774f847e376e270 (image=quay.io/ceph/ceph:v19, name=magical_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:33:24 compute-0 systemd[1]: Started libpod-conmon-be9783de126ffd18342fa7667a17660f9f93703d277acbe83774f847e376e270.scope.
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66516e7e008f70ee8ab92df8e3ad94dd7b12da29f7f86c75d62acf8e11ef988/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66516e7e008f70ee8ab92df8e3ad94dd7b12da29f7f86c75d62acf8e11ef988/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66516e7e008f70ee8ab92df8e3ad94dd7b12da29f7f86c75d62acf8e11ef988/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-0 podman[88286]: 2025-11-25 09:33:24.613090302 +0000 UTC m=+0.076859177 container init be9783de126ffd18342fa7667a17660f9f93703d277acbe83774f847e376e270 (image=quay.io/ceph/ceph:v19, name=magical_goodall, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:24 compute-0 podman[88286]: 2025-11-25 09:33:24.616822196 +0000 UTC m=+0.080591071 container start be9783de126ffd18342fa7667a17660f9f93703d277acbe83774f847e376e270 (image=quay.io/ceph/ceph:v19, name=magical_goodall, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 09:33:24 compute-0 podman[88286]: 2025-11-25 09:33:24.618046494 +0000 UTC m=+0.081815369 container attach be9783de126ffd18342fa7667a17660f9f93703d277acbe83774f847e376e270 (image=quay.io/ceph/ceph:v19, name=magical_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:33:24 compute-0 podman[88286]: 2025-11-25 09:33:24.550626473 +0000 UTC m=+0.014395348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:24 compute-0 sudo[88321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:24 compute-0 sudo[88321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:24 compute-0 sudo[88321]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:24 compute-0 sudo[88346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:33:24 compute-0 sudo[88346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Nov 25 09:33:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3373333714' entity='client.admin' 
Nov 25 09:33:24 compute-0 systemd[1]: libpod-be9783de126ffd18342fa7667a17660f9f93703d277acbe83774f847e376e270.scope: Deactivated successfully.
Nov 25 09:33:24 compute-0 podman[88286]: 2025-11-25 09:33:24.90919939 +0000 UTC m=+0.372968266 container died be9783de126ffd18342fa7667a17660f9f93703d277acbe83774f847e376e270 (image=quay.io/ceph/ceph:v19, name=magical_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c66516e7e008f70ee8ab92df8e3ad94dd7b12da29f7f86c75d62acf8e11ef988-merged.mount: Deactivated successfully.
Nov 25 09:33:24 compute-0 podman[88286]: 2025-11-25 09:33:24.928910057 +0000 UTC m=+0.392678942 container remove be9783de126ffd18342fa7667a17660f9f93703d277acbe83774f847e376e270 (image=quay.io/ceph/ceph:v19, name=magical_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:33:24 compute-0 sudo[88283]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:24 compute-0 systemd[1]: libpod-conmon-be9783de126ffd18342fa7667a17660f9f93703d277acbe83774f847e376e270.scope: Deactivated successfully.
Nov 25 09:33:25 compute-0 podman[88412]: 2025-11-25 09:33:25.10297822 +0000 UTC m=+0.029641939 container create 8d8505705a340a17e7a2e92978ab4ded691aaeac27347c3765aad2e4b7d233c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hermann, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:33:25 compute-0 systemd[1]: Started libpod-conmon-8d8505705a340a17e7a2e92978ab4ded691aaeac27347c3765aad2e4b7d233c7.scope.
Nov 25 09:33:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:25 compute-0 podman[88412]: 2025-11-25 09:33:25.145775696 +0000 UTC m=+0.072439435 container init 8d8505705a340a17e7a2e92978ab4ded691aaeac27347c3765aad2e4b7d233c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:25 compute-0 podman[88412]: 2025-11-25 09:33:25.150122691 +0000 UTC m=+0.076786410 container start 8d8505705a340a17e7a2e92978ab4ded691aaeac27347c3765aad2e4b7d233c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:33:25 compute-0 podman[88412]: 2025-11-25 09:33:25.15138983 +0000 UTC m=+0.078053549 container attach 8d8505705a340a17e7a2e92978ab4ded691aaeac27347c3765aad2e4b7d233c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hermann, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:25 compute-0 interesting_hermann[88425]: 167 167
Nov 25 09:33:25 compute-0 systemd[1]: libpod-8d8505705a340a17e7a2e92978ab4ded691aaeac27347c3765aad2e4b7d233c7.scope: Deactivated successfully.
Nov 25 09:33:25 compute-0 podman[88412]: 2025-11-25 09:33:25.15343279 +0000 UTC m=+0.080096510 container died 8d8505705a340a17e7a2e92978ab4ded691aaeac27347c3765aad2e4b7d233c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:33:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c1ae26a7b0c2831083380bc0f8d37b09851db84b4a1b105b05b77ec38500ca0-merged.mount: Deactivated successfully.
Nov 25 09:33:25 compute-0 podman[88412]: 2025-11-25 09:33:25.17050321 +0000 UTC m=+0.097166929 container remove 8d8505705a340a17e7a2e92978ab4ded691aaeac27347c3765aad2e4b7d233c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hermann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:33:25 compute-0 podman[88412]: 2025-11-25 09:33:25.091875641 +0000 UTC m=+0.018539380 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:25 compute-0 systemd[1]: libpod-conmon-8d8505705a340a17e7a2e92978ab4ded691aaeac27347c3765aad2e4b7d233c7.scope: Deactivated successfully.
Nov 25 09:33:25 compute-0 podman[88448]: 2025-11-25 09:33:25.283755279 +0000 UTC m=+0.028890351 container create 099a5f2b84207b02730b2abb54c6403efcef57160ff4b75a715e79c2bceca4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:25 compute-0 systemd[1]: Started libpod-conmon-099a5f2b84207b02730b2abb54c6403efcef57160ff4b75a715e79c2bceca4dd.scope.
Nov 25 09:33:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b119439ece56f60a7a591ca271fc7d146f5e357dc22a6fed4c320cd2b8cb98d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b119439ece56f60a7a591ca271fc7d146f5e357dc22a6fed4c320cd2b8cb98d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b119439ece56f60a7a591ca271fc7d146f5e357dc22a6fed4c320cd2b8cb98d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b119439ece56f60a7a591ca271fc7d146f5e357dc22a6fed4c320cd2b8cb98d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b119439ece56f60a7a591ca271fc7d146f5e357dc22a6fed4c320cd2b8cb98d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:25 compute-0 sudo[88487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stlzwxonyuoflxilcsnpvzdejitgmmro ; /usr/bin/python3'
Nov 25 09:33:25 compute-0 sudo[88487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:25 compute-0 podman[88448]: 2025-11-25 09:33:25.347701162 +0000 UTC m=+0.092836244 container init 099a5f2b84207b02730b2abb54c6403efcef57160ff4b75a715e79c2bceca4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:33:25 compute-0 podman[88448]: 2025-11-25 09:33:25.353069792 +0000 UTC m=+0.098204855 container start 099a5f2b84207b02730b2abb54c6403efcef57160ff4b75a715e79c2bceca4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:25 compute-0 podman[88448]: 2025-11-25 09:33:25.355207412 +0000 UTC m=+0.100342494 container attach 099a5f2b84207b02730b2abb54c6403efcef57160ff4b75a715e79c2bceca4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:33:25 compute-0 podman[88448]: 2025-11-25 09:33:25.271973599 +0000 UTC m=+0.017108671 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:25 compute-0 python3[88489]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:25 compute-0 sudo[88487]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:25 compute-0 musing_nightingale[88468]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:33:25 compute-0 musing_nightingale[88468]: --> All data devices are unavailable
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:25 compute-0 ceph-mon[74207]: pgmap v75: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3373333714' entity='client.admin' 
Nov 25 09:33:25 compute-0 systemd[1]: libpod-099a5f2b84207b02730b2abb54c6403efcef57160ff4b75a715e79c2bceca4dd.scope: Deactivated successfully.
Nov 25 09:33:25 compute-0 podman[88448]: 2025-11-25 09:33:25.606010366 +0000 UTC m=+0.351145428 container died 099a5f2b84207b02730b2abb54c6403efcef57160ff4b75a715e79c2bceca4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:33:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b119439ece56f60a7a591ca271fc7d146f5e357dc22a6fed4c320cd2b8cb98d9-merged.mount: Deactivated successfully.
Nov 25 09:33:25 compute-0 podman[88448]: 2025-11-25 09:33:25.626971158 +0000 UTC m=+0.372106221 container remove 099a5f2b84207b02730b2abb54c6403efcef57160ff4b75a715e79c2bceca4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_nightingale, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:25 compute-0 systemd[1]: libpod-conmon-099a5f2b84207b02730b2abb54c6403efcef57160ff4b75a715e79c2bceca4dd.scope: Deactivated successfully.
Nov 25 09:33:25 compute-0 sudo[88346]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:25 compute-0 sudo[88521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:25 compute-0 sudo[88521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:25 compute-0 sudo[88521]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:25 compute-0 sudo[88546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:33:25 compute-0 sudo[88546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:25 compute-0 sudo[88594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjiqgwzjnphanypbywvxpvezfkwsfexf ; /usr/bin/python3'
Nov 25 09:33:25 compute-0 sudo[88594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:25 compute-0 python3[88596]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.zcfgby/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:25 compute-0 podman[88597]: 2025-11-25 09:33:25.908727011 +0000 UTC m=+0.029860992 container create da21050ec04cf8a87c6df3383d11967eaf7029f81b9edef8cf30b35ec930214f (image=quay.io/ceph/ceph:v19, name=sharp_rubin, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:25 compute-0 systemd[1]: Started libpod-conmon-da21050ec04cf8a87c6df3383d11967eaf7029f81b9edef8cf30b35ec930214f.scope.
Nov 25 09:33:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33aee6d8617641bfff4d02ee9d4e75008f367a7937843691426e71bd8e742ea/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33aee6d8617641bfff4d02ee9d4e75008f367a7937843691426e71bd8e742ea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33aee6d8617641bfff4d02ee9d4e75008f367a7937843691426e71bd8e742ea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:25 compute-0 podman[88597]: 2025-11-25 09:33:25.964713891 +0000 UTC m=+0.085847882 container init da21050ec04cf8a87c6df3383d11967eaf7029f81b9edef8cf30b35ec930214f (image=quay.io/ceph/ceph:v19, name=sharp_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:33:25 compute-0 podman[88597]: 2025-11-25 09:33:25.96937666 +0000 UTC m=+0.090510641 container start da21050ec04cf8a87c6df3383d11967eaf7029f81b9edef8cf30b35ec930214f (image=quay.io/ceph/ceph:v19, name=sharp_rubin, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:25 compute-0 podman[88597]: 2025-11-25 09:33:25.970639851 +0000 UTC m=+0.091773833 container attach da21050ec04cf8a87c6df3383d11967eaf7029f81b9edef8cf30b35ec930214f (image=quay.io/ceph/ceph:v19, name=sharp_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:33:25 compute-0 podman[88597]: 2025-11-25 09:33:25.895815502 +0000 UTC m=+0.016949493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:26 compute-0 podman[88644]: 2025-11-25 09:33:26.048260813 +0000 UTC m=+0.028279840 container create e0370a7eea3d37e79915a7aa6bcbb283c96393aec1a0686ec52b9983e3a3e764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:26 compute-0 systemd[1]: Started libpod-conmon-e0370a7eea3d37e79915a7aa6bcbb283c96393aec1a0686ec52b9983e3a3e764.scope.
Nov 25 09:33:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:26 compute-0 podman[88644]: 2025-11-25 09:33:26.116033782 +0000 UTC m=+0.096052808 container init e0370a7eea3d37e79915a7aa6bcbb283c96393aec1a0686ec52b9983e3a3e764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curie, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:33:26 compute-0 podman[88644]: 2025-11-25 09:33:26.120700277 +0000 UTC m=+0.100719305 container start e0370a7eea3d37e79915a7aa6bcbb283c96393aec1a0686ec52b9983e3a3e764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:26 compute-0 podman[88644]: 2025-11-25 09:33:26.122514347 +0000 UTC m=+0.102533394 container attach e0370a7eea3d37e79915a7aa6bcbb283c96393aec1a0686ec52b9983e3a3e764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:26 compute-0 dreamy_curie[88676]: 167 167
Nov 25 09:33:26 compute-0 systemd[1]: libpod-e0370a7eea3d37e79915a7aa6bcbb283c96393aec1a0686ec52b9983e3a3e764.scope: Deactivated successfully.
Nov 25 09:33:26 compute-0 podman[88644]: 2025-11-25 09:33:26.12515757 +0000 UTC m=+0.105176597 container died e0370a7eea3d37e79915a7aa6bcbb283c96393aec1a0686ec52b9983e3a3e764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:33:26 compute-0 podman[88644]: 2025-11-25 09:33:26.037172972 +0000 UTC m=+0.017192019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0640308ec6c4ae65f32dee5263185228b1e160fe30c57208234238eb87097649-merged.mount: Deactivated successfully.
Nov 25 09:33:26 compute-0 podman[88644]: 2025-11-25 09:33:26.153710695 +0000 UTC m=+0.133729722 container remove e0370a7eea3d37e79915a7aa6bcbb283c96393aec1a0686ec52b9983e3a3e764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:33:26 compute-0 systemd[1]: libpod-conmon-e0370a7eea3d37e79915a7aa6bcbb283c96393aec1a0686ec52b9983e3a3e764.scope: Deactivated successfully.
Nov 25 09:33:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.zcfgby/server_addr}] v 0)
Nov 25 09:33:26 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3012149904' entity='client.admin' 
Nov 25 09:33:26 compute-0 systemd[1]: libpod-da21050ec04cf8a87c6df3383d11967eaf7029f81b9edef8cf30b35ec930214f.scope: Deactivated successfully.
Nov 25 09:33:26 compute-0 podman[88597]: 2025-11-25 09:33:26.257508983 +0000 UTC m=+0.378642964 container died da21050ec04cf8a87c6df3383d11967eaf7029f81b9edef8cf30b35ec930214f (image=quay.io/ceph/ceph:v19, name=sharp_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:33:26 compute-0 podman[88597]: 2025-11-25 09:33:26.280228932 +0000 UTC m=+0.401362913 container remove da21050ec04cf8a87c6df3383d11967eaf7029f81b9edef8cf30b35ec930214f (image=quay.io/ceph/ceph:v19, name=sharp_rubin, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:26 compute-0 podman[88698]: 2025-11-25 09:33:26.285735483 +0000 UTC m=+0.051116880 container create e3a19c80eccb955d992db5b93e717519d30c76f94c4deecd41f7fec6f0efa9c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 09:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a33aee6d8617641bfff4d02ee9d4e75008f367a7937843691426e71bd8e742ea-merged.mount: Deactivated successfully.
Nov 25 09:33:26 compute-0 systemd[1]: libpod-conmon-da21050ec04cf8a87c6df3383d11967eaf7029f81b9edef8cf30b35ec930214f.scope: Deactivated successfully.
Nov 25 09:33:26 compute-0 sudo[88594]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:26 compute-0 systemd[1]: Started libpod-conmon-e3a19c80eccb955d992db5b93e717519d30c76f94c4deecd41f7fec6f0efa9c1.scope.
Nov 25 09:33:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d64d4995613cac25e80b105201fbc185403310397890570cee1c9cf9c07e34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d64d4995613cac25e80b105201fbc185403310397890570cee1c9cf9c07e34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d64d4995613cac25e80b105201fbc185403310397890570cee1c9cf9c07e34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d64d4995613cac25e80b105201fbc185403310397890570cee1c9cf9c07e34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:26 compute-0 podman[88698]: 2025-11-25 09:33:26.26448267 +0000 UTC m=+0.029864087 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:26 compute-0 podman[88698]: 2025-11-25 09:33:26.356733419 +0000 UTC m=+0.122114837 container init e3a19c80eccb955d992db5b93e717519d30c76f94c4deecd41f7fec6f0efa9c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:33:26 compute-0 podman[88698]: 2025-11-25 09:33:26.361579123 +0000 UTC m=+0.126960521 container start e3a19c80eccb955d992db5b93e717519d30c76f94c4deecd41f7fec6f0efa9c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:26 compute-0 podman[88698]: 2025-11-25 09:33:26.362687122 +0000 UTC m=+0.128068520 container attach e3a19c80eccb955d992db5b93e717519d30c76f94c4deecd41f7fec6f0efa9c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hamilton, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:26 compute-0 clever_hamilton[88723]: {
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:     "1": [
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:         {
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "devices": [
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "/dev/loop3"
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             ],
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "lv_name": "ceph_lv0",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "lv_size": "21470642176",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "name": "ceph_lv0",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "tags": {
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.cluster_name": "ceph",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.crush_device_class": "",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.encrypted": "0",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.osd_id": "1",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.type": "block",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.vdo": "0",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:                 "ceph.with_tpm": "0"
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             },
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "type": "block",
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:             "vg_name": "ceph_vg0"
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:         }
Nov 25 09:33:26 compute-0 clever_hamilton[88723]:     ]
Nov 25 09:33:26 compute-0 clever_hamilton[88723]: }
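The JSON above is ceph-volume's LVM inventory for OSD 1: one bluestore block LV (ceph_vg0/ceph_lv0, ~20 GiB) carved from /dev/loop3, with the cluster and OSD fsids recorded as LV tags. A minimal sketch for pulling the same inventory on the host and summarizing it per OSD; the jq post-processing is illustrative and not part of the original run:

    # List LVM-backed OSDs as JSON through cephadm's bundled ceph-volume,
    # then print "osd.<id> <backing devices>" for each entry.
    sudo cephadm ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 \
        -- lvm list --format json \
      | jq -r 'to_entries[] | "osd.\(.key) \(.value[0].devices | join(","))"'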
Nov 25 09:33:26 compute-0 systemd[1]: libpod-e3a19c80eccb955d992db5b93e717519d30c76f94c4deecd41f7fec6f0efa9c1.scope: Deactivated successfully.
Nov 25 09:33:26 compute-0 podman[88698]: 2025-11-25 09:33:26.586693783 +0000 UTC m=+0.352075201 container died e3a19c80eccb955d992db5b93e717519d30c76f94c4deecd41f7fec6f0efa9c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-64d64d4995613cac25e80b105201fbc185403310397890570cee1c9cf9c07e34-merged.mount: Deactivated successfully.
Nov 25 09:33:26 compute-0 podman[88698]: 2025-11-25 09:33:26.608338565 +0000 UTC m=+0.373719963 container remove e3a19c80eccb955d992db5b93e717519d30c76f94c4deecd41f7fec6f0efa9c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:26 compute-0 systemd[1]: libpod-conmon-e3a19c80eccb955d992db5b93e717519d30c76f94c4deecd41f7fec6f0efa9c1.scope: Deactivated successfully.
Nov 25 09:33:26 compute-0 sudo[88546]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:26 compute-0 sudo[88741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:26 compute-0 sudo[88741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:26 compute-0 sudo[88741]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:26 compute-0 sudo[88766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:33:26 compute-0 sudo[88766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:26 compute-0 sudo[88814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvfhgsmimxrjufookjxfgcmcjdynlgsu ; /usr/bin/python3'
Nov 25 09:33:26 compute-0 sudo[88814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:26 compute-0 python3[88816]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.plffrn/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
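The Ansible task above is just `ceph config set` wrapped in a one-shot podman container. Stripped of the wrapper, the host-side equivalent is roughly (a sketch assuming a local ceph CLI and the admin keyring mounted above):

    # Pin the dashboard bind address for the compute-1 mgr daemon;
    # mgr/dashboard/<mgr-name>/server_addr is a per-daemon key.
    ceph -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config set mgr mgr/dashboard/compute-1.plffrn/server_addr 192.168.122.101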
Nov 25 09:33:26 compute-0 podman[88844]: 2025-11-25 09:33:26.975820516 +0000 UTC m=+0.033823822 container create 9eeae9d6e672961dda5e94a4eb7b3e79a16644aa6c793fd2ea9263849391e46d (image=quay.io/ceph/ceph:v19, name=focused_borg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 09:33:26 compute-0 podman[88855]: 2025-11-25 09:33:26.999094709 +0000 UTC m=+0.035759580 container create b5e25c0d36882a4cdaf3f28bee15e048800b20925b954f9d383a4990beb05ffe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:27 compute-0 systemd[1]: Started libpod-conmon-9eeae9d6e672961dda5e94a4eb7b3e79a16644aa6c793fd2ea9263849391e46d.scope.
Nov 25 09:33:27 compute-0 systemd[1]: Started libpod-conmon-b5e25c0d36882a4cdaf3f28bee15e048800b20925b954f9d383a4990beb05ffe.scope.
Nov 25 09:33:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18e4c762f3f7431d660c09be8f3fe5fe7cdac3581d20896a146b86f32e51de98/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18e4c762f3f7431d660c09be8f3fe5fe7cdac3581d20896a146b86f32e51de98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18e4c762f3f7431d660c09be8f3fe5fe7cdac3581d20896a146b86f32e51de98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:27 compute-0 podman[88844]: 2025-11-25 09:33:27.038401034 +0000 UTC m=+0.096404360 container init 9eeae9d6e672961dda5e94a4eb7b3e79a16644aa6c793fd2ea9263849391e46d (image=quay.io/ceph/ceph:v19, name=focused_borg, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:33:27 compute-0 podman[88855]: 2025-11-25 09:33:27.041463507 +0000 UTC m=+0.078128378 container init b5e25c0d36882a4cdaf3f28bee15e048800b20925b954f9d383a4990beb05ffe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 09:33:27 compute-0 podman[88844]: 2025-11-25 09:33:27.042720756 +0000 UTC m=+0.100724063 container start 9eeae9d6e672961dda5e94a4eb7b3e79a16644aa6c793fd2ea9263849391e46d (image=quay.io/ceph/ceph:v19, name=focused_borg, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:33:27 compute-0 podman[88844]: 2025-11-25 09:33:27.043661009 +0000 UTC m=+0.101664316 container attach 9eeae9d6e672961dda5e94a4eb7b3e79a16644aa6c793fd2ea9263849391e46d (image=quay.io/ceph/ceph:v19, name=focused_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:33:27 compute-0 podman[88855]: 2025-11-25 09:33:27.045488815 +0000 UTC m=+0.082153686 container start b5e25c0d36882a4cdaf3f28bee15e048800b20925b954f9d383a4990beb05ffe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 09:33:27 compute-0 podman[88855]: 2025-11-25 09:33:27.046571807 +0000 UTC m=+0.083236678 container attach b5e25c0d36882a4cdaf3f28bee15e048800b20925b954f9d383a4990beb05ffe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 09:33:27 compute-0 bold_knuth[88876]: 167 167
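The bare `167 167` from bold_knuth is the ceph UID/GID pair inside the image, which cephadm probes before deploying daemons so host-side files get the right ownership. A hedged reconstruction of that probe (stat'ing /var/lib/ceph in the image is an assumption, not shown in the log):

    # Print the owning uid/gid of /var/lib/ceph inside the ceph image
    # (167 is the 'ceph' user and group in the upstream container).
    sudo podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        -c '%u %g' /var/lib/ceph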
Nov 25 09:33:27 compute-0 podman[88855]: 2025-11-25 09:33:27.048865731 +0000 UTC m=+0.085530642 container died b5e25c0d36882a4cdaf3f28bee15e048800b20925b954f9d383a4990beb05ffe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_knuth, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:33:27 compute-0 systemd[1]: libpod-b5e25c0d36882a4cdaf3f28bee15e048800b20925b954f9d383a4990beb05ffe.scope: Deactivated successfully.
Nov 25 09:33:27 compute-0 podman[88844]: 2025-11-25 09:33:26.964409705 +0000 UTC m=+0.022413032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:27 compute-0 podman[88855]: 2025-11-25 09:33:27.063096557 +0000 UTC m=+0.099761428 container remove b5e25c0d36882a4cdaf3f28bee15e048800b20925b954f9d383a4990beb05ffe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_knuth, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 09:33:27 compute-0 podman[88855]: 2025-11-25 09:33:26.987493239 +0000 UTC m=+0.024158111 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:27 compute-0 systemd[1]: libpod-conmon-b5e25c0d36882a4cdaf3f28bee15e048800b20925b954f9d383a4990beb05ffe.scope: Deactivated successfully.
Nov 25 09:33:27 compute-0 podman[88918]: 2025-11-25 09:33:27.178860659 +0000 UTC m=+0.027212910 container create 2fb97f64529939a45a252ffd4fe21a6467ac572b2c481e293d7e6b8514234350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 09:33:27 compute-0 systemd[1]: Started libpod-conmon-2fb97f64529939a45a252ffd4fe21a6467ac572b2c481e293d7e6b8514234350.scope.
Nov 25 09:33:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b17f0a06f1c7f66dcc63f7ef52e09244c5730682d7523a3454c8ffaad6e57d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b17f0a06f1c7f66dcc63f7ef52e09244c5730682d7523a3454c8ffaad6e57d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b17f0a06f1c7f66dcc63f7ef52e09244c5730682d7523a3454c8ffaad6e57d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b17f0a06f1c7f66dcc63f7ef52e09244c5730682d7523a3454c8ffaad6e57d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:27 compute-0 podman[88918]: 2025-11-25 09:33:27.233073344 +0000 UTC m=+0.081425614 container init 2fb97f64529939a45a252ffd4fe21a6467ac572b2c481e293d7e6b8514234350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:33:27 compute-0 podman[88918]: 2025-11-25 09:33:27.239128198 +0000 UTC m=+0.087480458 container start 2fb97f64529939a45a252ffd4fe21a6467ac572b2c481e293d7e6b8514234350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 09:33:27 compute-0 podman[88918]: 2025-11-25 09:33:27.240740727 +0000 UTC m=+0.089092977 container attach 2fb97f64529939a45a252ffd4fe21a6467ac572b2c481e293d7e6b8514234350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_rosalind, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3012149904' entity='client.admin' 
Nov 25 09:33:27 compute-0 ceph-mon[74207]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:27 compute-0 podman[88918]: 2025-11-25 09:33:27.167512277 +0000 UTC m=+0.015864547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1bbfd20a72a0cb0c0f6ae9e183f6b0d0804fb10e42bb2d6b43b579112f8d2a0-merged.mount: Deactivated successfully.
Nov 25 09:33:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.plffrn/server_addr}] v 0)
Nov 25 09:33:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2675069756' entity='client.admin' 
Nov 25 09:33:27 compute-0 systemd[1]: libpod-9eeae9d6e672961dda5e94a4eb7b3e79a16644aa6c793fd2ea9263849391e46d.scope: Deactivated successfully.
Nov 25 09:33:27 compute-0 podman[88844]: 2025-11-25 09:33:27.347065918 +0000 UTC m=+0.405069224 container died 9eeae9d6e672961dda5e94a4eb7b3e79a16644aa6c793fd2ea9263849391e46d (image=quay.io/ceph/ceph:v19, name=focused_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 25 09:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-18e4c762f3f7431d660c09be8f3fe5fe7cdac3581d20896a146b86f32e51de98-merged.mount: Deactivated successfully.
Nov 25 09:33:27 compute-0 podman[88844]: 2025-11-25 09:33:27.367654909 +0000 UTC m=+0.425658215 container remove 9eeae9d6e672961dda5e94a4eb7b3e79a16644aa6c793fd2ea9263849391e46d (image=quay.io/ceph/ceph:v19, name=focused_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:33:27 compute-0 sudo[88814]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:27 compute-0 systemd[1]: libpod-conmon-9eeae9d6e672961dda5e94a4eb7b3e79a16644aa6c793fd2ea9263849391e46d.scope: Deactivated successfully.
Nov 25 09:33:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:27 compute-0 lvm[89018]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:33:27 compute-0 lvm[89018]: VG ceph_vg0 finished
Nov 25 09:33:27 compute-0 fervent_rosalind[88931]: {}
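fervent_rosalind is the `raw list` run requested at 09:33:26, and it prints an empty object; the likely reason (an inference, only the `{}` is in the log) is that these OSDs are LVM-managed and therefore show up under `lvm list` instead. Automation consuming this output should treat `{}` as 'no raw OSDs', e.g.:

    # Guard: only proceed when raw list actually reports devices.
    raw_json=$(sudo cephadm ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 \
        -- raw list --format json)
    if [ "$(printf '%s' "$raw_json" | jq 'length')" -eq 0 ]; then
        echo "no raw (non-LVM) OSD devices found"
    fi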
Nov 25 09:33:27 compute-0 lvm[89021]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:33:27 compute-0 lvm[89021]: VG ceph_vg0 finished
Nov 25 09:33:27 compute-0 systemd[1]: libpod-2fb97f64529939a45a252ffd4fe21a6467ac572b2c481e293d7e6b8514234350.scope: Deactivated successfully.
Nov 25 09:33:27 compute-0 podman[88918]: 2025-11-25 09:33:27.754651763 +0000 UTC m=+0.603004023 container died 2fb97f64529939a45a252ffd4fe21a6467ac572b2c481e293d7e6b8514234350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_rosalind, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4b17f0a06f1c7f66dcc63f7ef52e09244c5730682d7523a3454c8ffaad6e57d-merged.mount: Deactivated successfully.
Nov 25 09:33:27 compute-0 podman[88918]: 2025-11-25 09:33:27.776966488 +0000 UTC m=+0.625318748 container remove 2fb97f64529939a45a252ffd4fe21a6467ac572b2c481e293d7e6b8514234350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 25 09:33:27 compute-0 systemd[1]: libpod-conmon-2fb97f64529939a45a252ffd4fe21a6467ac572b2c481e293d7e6b8514234350.scope: Deactivated successfully.
Nov 25 09:33:27 compute-0 sudo[88766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:27 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 54c1c67b-6299-4c3e-8deb-809cbfcd9603 (Updating rgw.rgw deployment (+3 -> 3))
Nov 25 09:33:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oidoiv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 25 09:33:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oidoiv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:33:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oidoiv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:33:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 25 09:33:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:27 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.oidoiv on compute-2
Nov 25 09:33:27 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.oidoiv on compute-2
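From here cephadm fans out the rgw.rgw service: progress event 54c1c67b-6299-4c3e-8deb-809cbfcd9603 tracks '+3 -> 3', and one rgw daemon lands on each compute host (compute-2 here, compute-1 and compute-0 below). The driving spec lives in /home/ceph-admin/specs/ceph_spec.yaml and is not shown in the log; a plausible minimal reconstruction would be applied like this:

    # Hypothetical rgw service spec matching the three deploys in this log.
    cat > /tmp/rgw_spec.yaml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF
    ceph orch apply -i /tmp/rgw_spec.yaml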
Nov 25 09:33:27 compute-0 sudo[89055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ospwmhcuqcvxrmapevzopqnvzmuphsbp ; /usr/bin/python3'
Nov 25 09:33:27 compute-0 sudo[89055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:28 compute-0 python3[89057]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.flybft/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:28 compute-0 podman[89058]: 2025-11-25 09:33:28.054121714 +0000 UTC m=+0.026697479 container create be6b0fa8e572be150e383e13268d250cbe170186cb3a91d71be971deb5856041 (image=quay.io/ceph/ceph:v19, name=goofy_bhabha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:28 compute-0 systemd[1]: Started libpod-conmon-be6b0fa8e572be150e383e13268d250cbe170186cb3a91d71be971deb5856041.scope.
Nov 25 09:33:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757c6d0112260fc0fe5511dd9487b96f04774d11bda6ed744bf04ad29b23667b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757c6d0112260fc0fe5511dd9487b96f04774d11bda6ed744bf04ad29b23667b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757c6d0112260fc0fe5511dd9487b96f04774d11bda6ed744bf04ad29b23667b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:28 compute-0 podman[89058]: 2025-11-25 09:33:28.119645551 +0000 UTC m=+0.092221315 container init be6b0fa8e572be150e383e13268d250cbe170186cb3a91d71be971deb5856041 (image=quay.io/ceph/ceph:v19, name=goofy_bhabha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:33:28 compute-0 podman[89058]: 2025-11-25 09:33:28.124301669 +0000 UTC m=+0.096877423 container start be6b0fa8e572be150e383e13268d250cbe170186cb3a91d71be971deb5856041 (image=quay.io/ceph/ceph:v19, name=goofy_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:33:28 compute-0 podman[89058]: 2025-11-25 09:33:28.12527906 +0000 UTC m=+0.097854824 container attach be6b0fa8e572be150e383e13268d250cbe170186cb3a91d71be971deb5856041 (image=quay.io/ceph/ceph:v19, name=goofy_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:28 compute-0 podman[89058]: 2025-11-25 09:33:28.043442804 +0000 UTC m=+0.016018588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2675069756' entity='client.admin' 
Nov 25 09:33:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oidoiv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:33:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oidoiv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:33:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:28 compute-0 ceph-mon[74207]: Deploying daemon rgw.rgw.compute-2.oidoiv on compute-2
Nov 25 09:33:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.flybft/server_addr}] v 0)
Nov 25 09:33:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1272850759' entity='client.admin' 
Nov 25 09:33:28 compute-0 systemd[1]: libpod-be6b0fa8e572be150e383e13268d250cbe170186cb3a91d71be971deb5856041.scope: Deactivated successfully.
Nov 25 09:33:28 compute-0 podman[89058]: 2025-11-25 09:33:28.405167889 +0000 UTC m=+0.377743663 container died be6b0fa8e572be150e383e13268d250cbe170186cb3a91d71be971deb5856041 (image=quay.io/ceph/ceph:v19, name=goofy_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 25 09:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-757c6d0112260fc0fe5511dd9487b96f04774d11bda6ed744bf04ad29b23667b-merged.mount: Deactivated successfully.
Nov 25 09:33:28 compute-0 podman[89058]: 2025-11-25 09:33:28.423860877 +0000 UTC m=+0.396436641 container remove be6b0fa8e572be150e383e13268d250cbe170186cb3a91d71be971deb5856041 (image=quay.io/ceph/ceph:v19, name=goofy_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:33:28 compute-0 systemd[1]: libpod-conmon-be6b0fa8e572be150e383e13268d250cbe170186cb3a91d71be971deb5856041.scope: Deactivated successfully.
Nov 25 09:33:28 compute-0 sudo[89055]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:28 compute-0 sudo[89129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxtgjaexmplaumgzcqjawnlwyianrszz ; /usr/bin/python3'
Nov 25 09:33:28 compute-0 sudo[89129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:28 compute-0 python3[89131]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
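This task toggles the dashboard module off through the same containerized client. Module changes are idempotent (this run ends with "module 'dashboard' is already disabled" below), so a guard is optional; scripts that want one can check the module list first, e.g.:

    # Disable the dashboard only if it is currently enabled.
    if ceph mgr module ls --format json \
         | jq -e '.enabled_modules | index("dashboard")' >/dev/null; then
        ceph mgr module disable dashboard
    fi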
Nov 25 09:33:28 compute-0 podman[89132]: 2025-11-25 09:33:28.697242609 +0000 UTC m=+0.026992733 container create d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141 (image=quay.io/ceph/ceph:v19, name=clever_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:33:28 compute-0 systemd[1]: Started libpod-conmon-d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141.scope.
Nov 25 09:33:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e2765ef958cc81e6eb12c8936fa83a369645bcaea2758cb81e28201d39acfc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e2765ef958cc81e6eb12c8936fa83a369645bcaea2758cb81e28201d39acfc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e2765ef958cc81e6eb12c8936fa83a369645bcaea2758cb81e28201d39acfc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:28 compute-0 podman[89132]: 2025-11-25 09:33:28.753994211 +0000 UTC m=+0.083744325 container init d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141 (image=quay.io/ceph/ceph:v19, name=clever_jennings, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:28 compute-0 podman[89132]: 2025-11-25 09:33:28.758217923 +0000 UTC m=+0.087968036 container start d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141 (image=quay.io/ceph/ceph:v19, name=clever_jennings, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:28 compute-0 podman[89132]: 2025-11-25 09:33:28.760679643 +0000 UTC m=+0.090429756 container attach d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141 (image=quay.io/ceph/ceph:v19, name=clever_jennings, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:28 compute-0 podman[89132]: 2025-11-25 09:33:28.686207167 +0000 UTC m=+0.015957302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:28 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 25 09:33:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.lyczeh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 25 09:33:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.lyczeh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:33:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.lyczeh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:33:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 25 09:33:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:28 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:28 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.lyczeh on compute-1
Nov 25 09:33:28 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.lyczeh on compute-1
Nov 25 09:33:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Nov 25 09:33:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4100665242' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 25 09:33:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1272850759' entity='client.admin' 
Nov 25 09:33:29 compute-0 ceph-mon[74207]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.lyczeh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:33:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.lyczeh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:33:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:29 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4100665242' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 25 09:33:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 25 09:33:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 25 09:33:29 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 25 09:33:29 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 27 pg[8.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Nov 25 09:33:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
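The `osd pool application enable` above comes from the newly started RGW daemon itself (entity client.rgw.rgw.compute-2.oidoiv) as it creates its .rgw.root pool; issued by hand, the same call is:

    # Tag the .rgw.root pool for the rgw application (RGW does this on first start).
    ceph osd pool application enable .rgw.root rgw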
Nov 25 09:33:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4100665242' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 25 09:33:29 compute-0 clever_jennings[89144]: module 'dashboard' is already disabled
Nov 25 09:33:29 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.zcfgby(active, since 117s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:29 compute-0 systemd[1]: libpod-d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141.scope: Deactivated successfully.
Nov 25 09:33:29 compute-0 conmon[89144]: conmon d65961a9bc24627e88b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141.scope/container/memory.events
Nov 25 09:33:29 compute-0 podman[89132]: 2025-11-25 09:33:29.971021697 +0000 UTC m=+1.300771811 container died d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141 (image=quay.io/ceph/ceph:v19, name=clever_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-94e2765ef958cc81e6eb12c8936fa83a369645bcaea2758cb81e28201d39acfc-merged.mount: Deactivated successfully.
Nov 25 09:33:29 compute-0 podman[89132]: 2025-11-25 09:33:29.989470444 +0000 UTC m=+1.319220558 container remove d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141 (image=quay.io/ceph/ceph:v19, name=clever_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 09:33:29 compute-0 systemd[1]: libpod-conmon-d65961a9bc24627e88b867c4adab8f51b4d119bf8197814630294284741e0141.scope: Deactivated successfully.
Nov 25 09:33:30 compute-0 sudo[89129]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.uosdwi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.uosdwi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.uosdwi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-0 sudo[89203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvquugebqfligsunjidrkcieezyiupkk ; /usr/bin/python3'
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:30 compute-0 sudo[89203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:30 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.uosdwi on compute-0
Nov 25 09:33:30 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.uosdwi on compute-0
Nov 25 09:33:30 compute-0 sudo[89206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:30 compute-0 sudo[89206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:30 compute-0 sudo[89206]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:30 compute-0 sudo[89231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:30 compute-0 sudo[89231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:30 compute-0 python3[89205]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:30 compute-0 podman[89256]: 2025-11-25 09:33:30.261803029 +0000 UTC m=+0.028202224 container create 4821561860e5e035e911f06dde66f33ac4847b164bb83b84f38fa1c1dfc875cf (image=quay.io/ceph/ceph:v19, name=mystifying_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:33:30 compute-0 systemd[1]: Started libpod-conmon-4821561860e5e035e911f06dde66f33ac4847b164bb83b84f38fa1c1dfc875cf.scope.
Nov 25 09:33:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab015ff32a3f472262c8af58f7c11be54db3407e97bb44d3a9ee482c48bd860/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab015ff32a3f472262c8af58f7c11be54db3407e97bb44d3a9ee482c48bd860/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab015ff32a3f472262c8af58f7c11be54db3407e97bb44d3a9ee482c48bd860/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:30 compute-0 podman[89256]: 2025-11-25 09:33:30.308244706 +0000 UTC m=+0.074643910 container init 4821561860e5e035e911f06dde66f33ac4847b164bb83b84f38fa1c1dfc875cf (image=quay.io/ceph/ceph:v19, name=mystifying_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:30 compute-0 podman[89256]: 2025-11-25 09:33:30.311957604 +0000 UTC m=+0.078356809 container start 4821561860e5e035e911f06dde66f33ac4847b164bb83b84f38fa1c1dfc875cf (image=quay.io/ceph/ceph:v19, name=mystifying_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:33:30 compute-0 podman[89256]: 2025-11-25 09:33:30.324908357 +0000 UTC m=+0.091307582 container attach 4821561860e5e035e911f06dde66f33ac4847b164bb83b84f38fa1c1dfc875cf (image=quay.io/ceph/ceph:v19, name=mystifying_moore, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:30 compute-0 podman[89256]: 2025-11-25 09:33:30.249652725 +0000 UTC m=+0.016051951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:30 compute-0 ceph-mon[74207]: Deploying daemon rgw.rgw.compute-1.lyczeh on compute-1
Nov 25 09:33:30 compute-0 ceph-mon[74207]: osdmap e27: 3 total, 3 up, 3 in
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/90661545' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4100665242' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mgrmap e11: compute-0.zcfgby(active, since 117s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.uosdwi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.uosdwi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:30 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:30 compute-0 podman[89325]: 2025-11-25 09:33:30.499245734 +0000 UTC m=+0.028424712 container create 110822c87caaea33498da82d4764974eef0ec8efc8f2b597e47e8ab633bb5183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_raman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:30 compute-0 systemd[1]: Started libpod-conmon-110822c87caaea33498da82d4764974eef0ec8efc8f2b597e47e8ab633bb5183.scope.
Nov 25 09:33:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:30 compute-0 podman[89325]: 2025-11-25 09:33:30.553025522 +0000 UTC m=+0.082204501 container init 110822c87caaea33498da82d4764974eef0ec8efc8f2b597e47e8ab633bb5183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:30 compute-0 podman[89325]: 2025-11-25 09:33:30.556576116 +0000 UTC m=+0.085755095 container start 110822c87caaea33498da82d4764974eef0ec8efc8f2b597e47e8ab633bb5183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:30 compute-0 podman[89325]: 2025-11-25 09:33:30.557668536 +0000 UTC m=+0.086847524 container attach 110822c87caaea33498da82d4764974eef0ec8efc8f2b597e47e8ab633bb5183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 09:33:30 compute-0 condescending_raman[89339]: 167 167
Nov 25 09:33:30 compute-0 systemd[1]: libpod-110822c87caaea33498da82d4764974eef0ec8efc8f2b597e47e8ab633bb5183.scope: Deactivated successfully.
Nov 25 09:33:30 compute-0 podman[89325]: 2025-11-25 09:33:30.559932874 +0000 UTC m=+0.089111872 container died 110822c87caaea33498da82d4764974eef0ec8efc8f2b597e47e8ab633bb5183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5c4efc07d8b96e3d4116a7ac0a8ead627920f075e456536d176bb8b1cc629a-merged.mount: Deactivated successfully.
Nov 25 09:33:30 compute-0 podman[89325]: 2025-11-25 09:33:30.574644767 +0000 UTC m=+0.103823744 container remove 110822c87caaea33498da82d4764974eef0ec8efc8f2b597e47e8ab633bb5183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 25 09:33:30 compute-0 podman[89325]: 2025-11-25 09:33:30.486954515 +0000 UTC m=+0.016133513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:30 compute-0 systemd[1]: libpod-conmon-110822c87caaea33498da82d4764974eef0ec8efc8f2b597e47e8ab633bb5183.scope: Deactivated successfully.
Nov 25 09:33:30 compute-0 systemd[1]: Reloading.
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1685845904' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 25 09:33:30 compute-0 systemd-sysv-generator[89379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:30 compute-0 systemd-rc-local-generator[89376]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:30 compute-0 systemd[1]: Reloading.
Nov 25 09:33:30 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v79: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:30 compute-0 systemd-rc-local-generator[89415]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:30 compute-0 systemd-sysv-generator[89419]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 25 09:33:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 25 09:33:30 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 25 09:33:30 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 28 pg[8.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:31 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.uosdwi for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:33:31 compute-0 podman[89475]: 2025-11-25 09:33:31.189335299 +0000 UTC m=+0.026351615 container create 3d928580d6f307fc9e4777727cc4ff7090fb0c9ed115cfa6be9dc09baba11d39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-0-uosdwi, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a66fab44d55ac42248a905213f9ffe266c6af821b02544a5e00678f0370bec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a66fab44d55ac42248a905213f9ffe266c6af821b02544a5e00678f0370bec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a66fab44d55ac42248a905213f9ffe266c6af821b02544a5e00678f0370bec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a66fab44d55ac42248a905213f9ffe266c6af821b02544a5e00678f0370bec/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.uosdwi supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:31 compute-0 podman[89475]: 2025-11-25 09:33:31.233301388 +0000 UTC m=+0.070317724 container init 3d928580d6f307fc9e4777727cc4ff7090fb0c9ed115cfa6be9dc09baba11d39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-0-uosdwi, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:31 compute-0 podman[89475]: 2025-11-25 09:33:31.236876707 +0000 UTC m=+0.073893024 container start 3d928580d6f307fc9e4777727cc4ff7090fb0c9ed115cfa6be9dc09baba11d39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-0-uosdwi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 25 09:33:31 compute-0 bash[89475]: 3d928580d6f307fc9e4777727cc4ff7090fb0c9ed115cfa6be9dc09baba11d39
Nov 25 09:33:31 compute-0 podman[89475]: 2025-11-25 09:33:31.178463986 +0000 UTC m=+0.015480323 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:31 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.uosdwi for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:33:31 compute-0 sudo[89231]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:31 compute-0 radosgw[89491]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:31 compute-0 radosgw[89491]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 25 09:33:31 compute-0 radosgw[89491]: framework: beast
Nov 25 09:33:31 compute-0 radosgw[89491]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 25 09:33:31 compute-0 radosgw[89491]: init_numa not setting numa affinity
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 54c1c67b-6299-4c3e-8deb-809cbfcd9603 (Updating rgw.rgw deployment (+3 -> 3))
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 54c1c67b-6299-4c3e-8deb-809cbfcd9603 (Updating rgw.rgw deployment (+3 -> 3)) in 3 seconds
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 5c0b2468-2fb9-441e-9c26-9275ea82a51d (Updating node-exporter deployment (+3 -> 3))
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Nov 25 09:33:31 compute-0 sudo[89761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:31 compute-0 sudo[89761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:31 compute-0 sudo[89761]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:31 compute-0 sudo[90105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:31 compute-0 sudo[90105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: Deploying daemon rgw.rgw.compute-0.uosdwi on compute-0
Nov 25 09:33:31 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1685845904' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 25 09:33:31 compute-0 ceph-mon[74207]: pgmap v79: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:31 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 25 09:33:31 compute-0 ceph-mon[74207]: osdmap e28: 3 total, 3 up, 3 in
Nov 25 09:33:31 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mon[74207]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1685845904' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 25 09:33:31 compute-0 systemd[1]: libpod-4821561860e5e035e911f06dde66f33ac4847b164bb83b84f38fa1c1dfc875cf.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 podman[89256]: 2025-11-25 09:33:31.432290826 +0000 UTC m=+1.198690031 container died 4821561860e5e035e911f06dde66f33ac4847b164bb83b84f38fa1c1dfc875cf (image=quay.io/ceph/ceph:v19, name=mystifying_moore, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.zcfgby(active, since 118s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fab015ff32a3f472262c8af58f7c11be54db3407e97bb44d3a9ee482c48bd860-merged.mount: Deactivated successfully.
Nov 25 09:33:31 compute-0 podman[89256]: 2025-11-25 09:33:31.462772255 +0000 UTC m=+1.229171460 container remove 4821561860e5e035e911f06dde66f33ac4847b164bb83b84f38fa1c1dfc875cf (image=quay.io/ceph/ceph:v19, name=mystifying_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:33:31 compute-0 systemd[1]: libpod-conmon-4821561860e5e035e911f06dde66f33ac4847b164bb83b84f38fa1c1dfc875cf.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 sudo[89203]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:31 compute-0 sshd-session[75755]: Connection closed by 192.168.122.100 port 54902
Nov 25 09:33:31 compute-0 sshd-session[75699]: Connection closed by 192.168.122.100 port 54880
Nov 25 09:33:31 compute-0 sshd-session[75726]: Connection closed by 192.168.122.100 port 54890
Nov 25 09:33:31 compute-0 sshd-session[75554]: Connection closed by 192.168.122.100 port 54828
Nov 25 09:33:31 compute-0 sshd-session[75612]: Connection closed by 192.168.122.100 port 54848
Nov 25 09:33:31 compute-0 sshd-session[75670]: Connection closed by 192.168.122.100 port 54878
Nov 25 09:33:31 compute-0 sshd-session[75641]: Connection closed by 192.168.122.100 port 54864
Nov 25 09:33:31 compute-0 sshd-session[75583]: Connection closed by 192.168.122.100 port 54842
Nov 25 09:33:31 compute-0 sshd-session[75467]: Connection closed by 192.168.122.100 port 54796
Nov 25 09:33:31 compute-0 sshd-session[75525]: Connection closed by 192.168.122.100 port 54820
Nov 25 09:33:31 compute-0 sshd-session[75496]: Connection closed by 192.168.122.100 port 54804
Nov 25 09:33:31 compute-0 sshd-session[75465]: Connection closed by 192.168.122.100 port 54780
Nov 25 09:33:31 compute-0 sshd-session[75609]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75752]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75463]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75522]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75696]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75723]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75638]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75443]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75551]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75493]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75580]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 sshd-session[75667]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:31 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 33 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 23 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 24 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 26 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 21 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 25 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 30 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 28 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 32 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 29 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 27 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Session 31 logged out. Waiting for processes to exit.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 23.
Nov 25 09:33:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setuser ceph since I am not root
Nov 25 09:33:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setgroup ceph since I am not root
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 24.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 26.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 21.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 25.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 30.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 28.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 32.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 29.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 31.
Nov 25 09:33:31 compute-0 systemd-logind[744]: Removed session 27.
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: pidfile_write: ignore empty --pid-file
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'alerts'
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'balancer'
Nov 25 09:33:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:31.638+0000 7fc21a5b3140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:31 compute-0 sudo[90214]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dehvmvetpdehlrvzkihuovjpxdwqvliv ; /usr/bin/python3'
Nov 25 09:33:31 compute-0 sudo[90214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:31 compute-0 systemd[1]: Reloading.
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:31.708+0000 7fc21a5b3140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:31 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'cephadm'
Nov 25 09:33:31 compute-0 systemd-rc-local-generator[90237]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:31 compute-0 systemd-sysv-generator[90245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:31 compute-0 python3[90219]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:31 compute-0 podman[90254]: 2025-11-25 09:33:31.848129369 +0000 UTC m=+0.027015437 container create cc9d7d42944d5206499e26d5d78575d252de33c485eb3ffcab83048518ac11c7 (image=quay.io/ceph/ceph:v19, name=frosty_merkle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:31 compute-0 systemd[1]: Started libpod-conmon-cc9d7d42944d5206499e26d5d78575d252de33c485eb3ffcab83048518ac11c7.scope.
Nov 25 09:33:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:31 compute-0 podman[90254]: 2025-11-25 09:33:31.837390215 +0000 UTC m=+0.016276303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17250a6df4c33cf0df5cce9694fa81592dd5169d39b5d7173aeb3c97601ada2b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17250a6df4c33cf0df5cce9694fa81592dd5169d39b5d7173aeb3c97601ada2b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17250a6df4c33cf0df5cce9694fa81592dd5169d39b5d7173aeb3c97601ada2b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:31 compute-0 podman[90254]: 2025-11-25 09:33:31.942839323 +0000 UTC m=+0.121725401 container init cc9d7d42944d5206499e26d5d78575d252de33c485eb3ffcab83048518ac11c7 (image=quay.io/ceph/ceph:v19, name=frosty_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 25 09:33:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:31 compute-0 podman[90254]: 2025-11-25 09:33:31.954958618 +0000 UTC m=+0.133844687 container start cc9d7d42944d5206499e26d5d78575d252de33c485eb3ffcab83048518ac11c7 (image=quay.io/ceph/ceph:v19, name=frosty_merkle, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:33:31 compute-0 systemd[1]: Reloading.
Nov 25 09:33:31 compute-0 podman[90254]: 2025-11-25 09:33:31.967920353 +0000 UTC m=+0.146806431 container attach cc9d7d42944d5206499e26d5d78575d252de33c485eb3ffcab83048518ac11c7 (image=quay.io/ceph/ceph:v19, name=frosty_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:33:32 compute-0 systemd-rc-local-generator[90294]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:33:32 compute-0 systemd-sysv-generator[90300]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:33:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 29 pg[9.0( empty local-lis/les=0/0 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:32 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:33:32 compute-0 bash[90381]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 25 09:33:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'crash'
Nov 25 09:33:32 compute-0 ceph-mon[74207]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:32 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1685845904' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 25 09:33:32 compute-0 ceph-mon[74207]: mgrmap e12: compute-0.zcfgby(active, since 118s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:32 compute-0 ceph-mon[74207]: osdmap e29: 3 total, 3 up, 3 in
Nov 25 09:33:32 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 09:33:32 compute-0 ceph-mgr[74476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'dashboard'
Nov 25 09:33:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:32.424+0000 7fc21a5b3140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:33:32 compute-0 bash[90381]: Getting image source signatures
Nov 25 09:33:32 compute-0 bash[90381]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 25 09:33:32 compute-0 bash[90381]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 25 09:33:32 compute-0 bash[90381]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 25 09:33:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 25 09:33:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 09:33:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 09:33:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 09:33:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 25 09:33:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 25 09:33:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 30 pg[9.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:32 compute-0 ceph-mgr[74476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:32.965+0000 7fc21a5b3140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:32 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: Improvements in the case of bugs are welcome, but sub-interpreter support is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   from numpy import show_config as show_numpy_config
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:33.107+0000 7fc21a5b3140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'influx'
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:33.170+0000 7fc21a5b3140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'insights'
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'iostat'
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:33.286+0000 7fc21a5b3140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:33:33 compute-0 bash[90381]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 25 09:33:33 compute-0 bash[90381]: Writing manifest to image destination
Nov 25 09:33:33 compute-0 podman[90381]: 2025-11-25 09:33:33.428612454 +0000 UTC m=+1.134219300 container create dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e63e7bfdb75029d94653e54e41462db524d74b47fe824701e94ac56e7401648/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:33 compute-0 podman[90381]: 2025-11-25 09:33:33.462042381 +0000 UTC m=+1.167649246 container init dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:33 compute-0 podman[90381]: 2025-11-25 09:33:33.465564542 +0000 UTC m=+1.171171386 container start dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:33 compute-0 bash[90381]: dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15
Nov 25 09:33:33 compute-0 podman[90381]: 2025-11-25 09:33:33.418805658 +0000 UTC m=+1.124412522 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.470Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.470Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 25 09:33:33 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.473Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.473Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.473Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.473Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=arp
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=bcache
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=bonding
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=cpu
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=dmi
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=edac
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=entropy
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=filefd
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=netclass
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=netdev
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=netstat
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=nfs
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=nvme
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=os
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=pressure
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=rapl
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=selinux
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=softnet
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=stat
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=textfile
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=time
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=uname
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=xfs
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=node_exporter.go:117 level=info collector=zfs
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 25 09:33:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[90450]: ts=2025-11-25T09:33:33.474Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 25 09:33:33 compute-0 sudo[90105]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:33 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 25 09:33:33 compute-0 systemd[1]: session-33.scope: Consumed 20.301s CPU time.
Nov 25 09:33:33 compute-0 systemd-logind[744]: Removed session 33.
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'localpool'
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mirroring'
Nov 25 09:33:33 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'nfs'
Nov 25 09:33:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 25 09:33:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 25 09:33:33 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 25 09:33:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 25 09:33:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:33 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 09:33:33 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 09:33:33 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 09:33:33 compute-0 ceph-mon[74207]: osdmap e30: 3 total, 3 up, 3 in
Nov 25 09:33:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 25 09:33:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 25 09:33:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:33:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:34.113+0000 7fc21a5b3140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:33:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:34.292+0000 7fc21a5b3140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_support'
Nov 25 09:33:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:34.358+0000 7fc21a5b3140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:33:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:34.414+0000 7fc21a5b3140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'progress'
Nov 25 09:33:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:34.478+0000 7fc21a5b3140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'prometheus'
Nov 25 09:33:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:34.537+0000 7fc21a5b3140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:33:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:34.817+0000 7fc21a5b3140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'restful'
Nov 25 09:33:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:34.894+0000 7fc21a5b3140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 25 09:33:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 09:33:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 09:33:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 09:33:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 25 09:33:34 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 25 09:33:34 compute-0 ceph-mon[74207]: osdmap e31: 3 total, 3 up, 3 in
Nov 25 09:33:34 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 09:33:34 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 09:33:34 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 09:33:34 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 09:33:34 compute-0 ceph-mon[74207]: osdmap e32: 3 total, 3 up, 3 in
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rgw'
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rook'
Nov 25 09:33:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:35.281+0000 7fc21a5b3140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'selftest'
Nov 25 09:33:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:35.730+0000 7fc21a5b3140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:33:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:35.790+0000 7fc21a5b3140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'stats'
Nov 25 09:33:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:35.855+0000 7fc21a5b3140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'status'
Nov 25 09:33:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 25 09:33:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 25 09:33:35 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telegraf'
Nov 25 09:33:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:35.976+0000 7fc21a5b3140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 25 09:33:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 25 09:33:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 25 09:33:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 33 pg[11.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telemetry'
Nov 25 09:33:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:36.037+0000 7fc21a5b3140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:33:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:36.164+0000 7fc21a5b3140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'volumes'
Nov 25 09:33:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:36.345+0000 7fc21a5b3140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:36.558+0000 7fc21a5b3140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'zabbix'
Nov 25 09:33:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:36.615+0000 7fc21a5b3140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zcfgby restarted
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zcfgby
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: ms_deliver_dispatch: unhandled message 0x561d07de7860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.zcfgby(active, starting, since 0.0171463s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr handle_mgr_map Activating!
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr handle_mgr_map I am now activating
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 34 pg[11.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: balancer
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Manager daemon compute-0.zcfgby is now available
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [balancer INFO root] Starting
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:33:36
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: cephadm
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: crash
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: dashboard
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO sso] Loading SSO DB version=1
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: devicehealth
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Starting
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: iostat
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: nfs
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: orchestrator
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: pg_autoscaler
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: progress
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [progress INFO root] Loading...
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fc1c0162760>, <progress.module.GhostEvent object at 0x7fc1c0162910>, <progress.module.GhostEvent object at 0x7fc1c0162940>, <progress.module.GhostEvent object at 0x7fc1c0162970>, <progress.module.GhostEvent object at 0x7fc1c01629a0>] historic events
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.flybft restarted
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.flybft started
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] recovery thread starting
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] starting setup
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.plffrn restarted
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.plffrn started
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: rbd_support
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: restful
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: status
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: telemetry
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [restful WARNING root] server not running: no certificate configured
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] PerfHandler: starting
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TaskHandler: starting
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"} v 0)
Nov 25 09:33:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [rbd_support INFO root] setup complete
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: volumes
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 25 09:33:36 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Nov 25 09:33:36 compute-0 ceph-mon[74207]: osdmap e33: 3 total, 3 up, 3 in
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: Active manager daemon compute-0.zcfgby restarted
Nov 25 09:33:36 compute-0 ceph-mon[74207]: Activating manager daemon compute-0.zcfgby
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 09:33:36 compute-0 ceph-mon[74207]: osdmap e34: 3 total, 3 up, 3 in
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: mgrmap e13: compute-0.zcfgby(active, starting, since 0.0171463s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: Manager daemon compute-0.zcfgby is now available
Nov 25 09:33:36 compute-0 ceph-mon[74207]: Standby manager daemon compute-2.flybft restarted
Nov 25 09:33:36 compute-0 ceph-mon[74207]: Standby manager daemon compute-2.flybft started
Nov 25 09:33:36 compute-0 ceph-mon[74207]: Standby manager daemon compute-1.plffrn restarted
Nov 25 09:33:36 compute-0 ceph-mon[74207]: Standby manager daemon compute-1.plffrn started
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:33:36 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:33:37 compute-0 sshd-session[90575]: Accepted publickey for ceph-admin from 192.168.122.100 port 42080 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:33:37 compute-0 systemd-logind[744]: New session 34 of user ceph-admin.
Nov 25 09:33:37 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Nov 25 09:33:37 compute-0 sshd-session[90575]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:33:37 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.module] Engine started.
Nov 25 09:33:37 compute-0 sudo[90598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:37 compute-0 sudo[90598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:37 compute-0 sudo[90598]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:37 compute-0 sudo[90624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:33:37 compute-0 sudo[90624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:37 compute-0 podman[90704]: 2025-11-25 09:33:37.55617403 +0000 UTC m=+0.041008154 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.zcfgby(active, since 1.02396s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:37 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.24173 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v3: 11 pgs: 11 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 09:33:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 25 09:33:37 compute-0 podman[90704]: 2025-11-25 09:33:37.650806939 +0000 UTC m=+0.135641063 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:37 compute-0 frosty_merkle[90270]: Option GRAFANA_API_USERNAME updated
Nov 25 09:33:37 compute-0 systemd[1]: libpod-cc9d7d42944d5206499e26d5d78575d252de33c485eb3ffcab83048518ac11c7.scope: Deactivated successfully.
Nov 25 09:33:37 compute-0 podman[90254]: 2025-11-25 09:33:37.684278946 +0000 UTC m=+5.863165014 container died cc9d7d42944d5206499e26d5d78575d252de33c485eb3ffcab83048518ac11c7 (image=quay.io/ceph/ceph:v19, name=frosty_merkle, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-17250a6df4c33cf0df5cce9694fa81592dd5169d39b5d7173aeb3c97601ada2b-merged.mount: Deactivated successfully.
Nov 25 09:33:37 compute-0 systemd[75447]: Starting Mark boot as successful...
Nov 25 09:33:37 compute-0 podman[90254]: 2025-11-25 09:33:37.735881169 +0000 UTC m=+5.914767237 container remove cc9d7d42944d5206499e26d5d78575d252de33c485eb3ffcab83048518ac11c7 (image=quay.io/ceph/ceph:v19, name=frosty_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:33:37 compute-0 systemd[1]: libpod-conmon-cc9d7d42944d5206499e26d5d78575d252de33c485eb3ffcab83048518ac11c7.scope: Deactivated successfully.
Nov 25 09:33:37 compute-0 systemd[75447]: Finished Mark boot as successful.
Nov 25 09:33:37 compute-0 sudo[90214]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:37 compute-0 radosgw[89491]: v1 topic migration: starting v1 topic migration..
Nov 25 09:33:37 compute-0 radosgw[89491]: LDAP not started since no server URIs were provided in the configuration.
Nov 25 09:33:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-0-uosdwi[89487]: 2025-11-25T09:33:37.782+0000 7ff28c49c980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 25 09:33:37 compute-0 radosgw[89491]: v1 topic migration: finished v1 topic migration
Nov 25 09:33:37 compute-0 radosgw[89491]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 25 09:33:37 compute-0 radosgw[89491]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 25 09:33:37 compute-0 radosgw[89491]: framework: beast
Nov 25 09:33:37 compute-0 radosgw[89491]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 25 09:33:37 compute-0 radosgw[89491]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 25 09:33:37 compute-0 radosgw[89491]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 25 09:33:37 compute-0 radosgw[89491]: starting handler: beast
Nov 25 09:33:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:37 compute-0 radosgw[89491]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:33:37 compute-0 radosgw[89491]: mgrc service_daemon_register rgw.14412 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.uosdwi,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865360,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6af48147-6aba-44e3-91a3-565a32433f82,zone_name=default,zonegroup_id=7f877101-a613-42fa-9374-f143e99606e2,zonegroup_name=default}
Nov 25 09:33:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:37 compute-0 sudo[90848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtselcyqagvwkrooduixtrxgeenjtdub ; /usr/bin/python3'
Nov 25 09:33:37 compute-0 sudo[90848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 python3[90857]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Nov 25 09:33:38 compute-0 podman[90886]: 2025-11-25 09:33:38.105927512 +0000 UTC m=+0.033539847 container create fcb159a65709aca80d6e14b0ea44d794bf951fdc72f113ada8d00e939ba01fcb (image=quay.io/ceph/ceph:v19, name=confident_curran, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:38] ENGINE Bus STARTING
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:38] ENGINE Bus STARTING
Nov 25 09:33:38 compute-0 systemd[1]: Started libpod-conmon-fcb159a65709aca80d6e14b0ea44d794bf951fdc72f113ada8d00e939ba01fcb.scope.
Nov 25 09:33:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbb6d5c50fb215b567259755d9e66a2584f0a8580c6671afae9026346a8925e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbb6d5c50fb215b567259755d9e66a2584f0a8580c6671afae9026346a8925e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbb6d5c50fb215b567259755d9e66a2584f0a8580c6671afae9026346a8925e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:38 compute-0 podman[90895]: 2025-11-25 09:33:38.160608017 +0000 UTC m=+0.053022030 container exec dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:38 compute-0 podman[90886]: 2025-11-25 09:33:38.163789875 +0000 UTC m=+0.091402220 container init fcb159a65709aca80d6e14b0ea44d794bf951fdc72f113ada8d00e939ba01fcb (image=quay.io/ceph/ceph:v19, name=confident_curran, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:33:38 compute-0 podman[90886]: 2025-11-25 09:33:38.168644518 +0000 UTC m=+0.096256852 container start fcb159a65709aca80d6e14b0ea44d794bf951fdc72f113ada8d00e939ba01fcb (image=quay.io/ceph/ceph:v19, name=confident_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 09:33:38 compute-0 podman[90886]: 2025-11-25 09:33:38.170096604 +0000 UTC m=+0.097708939 container attach fcb159a65709aca80d6e14b0ea44d794bf951fdc72f113ada8d00e939ba01fcb (image=quay.io/ceph/ceph:v19, name=confident_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:33:38 compute-0 podman[90895]: 2025-11-25 09:33:38.170281392 +0000 UTC m=+0.062695405 container exec_died dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:38 compute-0 podman[90886]: 2025-11-25 09:33:38.089100301 +0000 UTC m=+0.016712656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:38 compute-0 sudo[90624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:38] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:38] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 sudo[90947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:38 compute-0 sudo[90947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:38 compute-0 sudo[90947]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:38 compute-0 sudo[91003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:33:38 compute-0 sudo[91003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:38] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:38] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:38] ENGINE Bus STARTED
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:38] ENGINE Bus STARTED
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:38] ENGINE Client ('192.168.122.100', 57652) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:38] ENGINE Client ('192.168.122.100', 57652) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14454 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 confident_curran[90920]: Option GRAFANA_API_PASSWORD updated
Nov 25 09:33:38 compute-0 systemd[1]: libpod-fcb159a65709aca80d6e14b0ea44d794bf951fdc72f113ada8d00e939ba01fcb.scope: Deactivated successfully.
Nov 25 09:33:38 compute-0 podman[90886]: 2025-11-25 09:33:38.49741977 +0000 UTC m=+0.425032106 container died fcb159a65709aca80d6e14b0ea44d794bf951fdc72f113ada8d00e939ba01fcb (image=quay.io/ceph/ceph:v19, name=confident_curran, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Nov 25 09:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fbb6d5c50fb215b567259755d9e66a2584f0a8580c6671afae9026346a8925e-merged.mount: Deactivated successfully.
Nov 25 09:33:38 compute-0 podman[90886]: 2025-11-25 09:33:38.516869333 +0000 UTC m=+0.444481669 container remove fcb159a65709aca80d6e14b0ea44d794bf951fdc72f113ada8d00e939ba01fcb (image=quay.io/ceph/ceph:v19, name=confident_curran, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:33:38 compute-0 systemd[1]: libpod-conmon-fcb159a65709aca80d6e14b0ea44d794bf951fdc72f113ada8d00e939ba01fcb.scope: Deactivated successfully.
Nov 25 09:33:38 compute-0 sudo[90848]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mgrmap e14: compute-0.zcfgby(active, since 1.02396s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 09:33:38 compute-0 ceph-mon[74207]: osdmap e35: 3 total, 3 up, 3 in
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:38] ENGINE Bus STARTING
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:38] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:38] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:33:38 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:38] ENGINE Bus STARTED
Nov 25 09:33:38 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:38] ENGINE Client ('192.168.122.100', 57652) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='client.14454 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:38 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v5: 11 pgs: 11 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:38 compute-0 sudo[91079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhjgvsjtefyqwjpjicaqswqwlcwsmmmx ; /usr/bin/python3'
Nov 25 09:33:38 compute-0 sudo[91079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:38 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Check health
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 sudo[91003]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 25 09:33:38 compute-0 python3[91081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:38 compute-0 sudo[91105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:38 compute-0 sudo[91105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:38 compute-0 sudo[91105]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:38 compute-0 podman[91129]: 2025-11-25 09:33:38.828099343 +0000 UTC m=+0.025021609 container create ab7f64ff2e4a4081af309b5289919c1002e34f07247248eae0be05237202b432 (image=quay.io/ceph/ceph:v19, name=awesome_hoover, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:38 compute-0 systemd[1]: Started libpod-conmon-ab7f64ff2e4a4081af309b5289919c1002e34f07247248eae0be05237202b432.scope.
Nov 25 09:33:38 compute-0 sudo[91131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 09:33:38 compute-0 sudo[91131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29805c973f9cef6a2aae9c2136e2ca8826e6c396fcbe70d6758f0a4c386f49f9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29805c973f9cef6a2aae9c2136e2ca8826e6c396fcbe70d6758f0a4c386f49f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29805c973f9cef6a2aae9c2136e2ca8826e6c396fcbe70d6758f0a4c386f49f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:38 compute-0 podman[91129]: 2025-11-25 09:33:38.887201544 +0000 UTC m=+0.084123820 container init ab7f64ff2e4a4081af309b5289919c1002e34f07247248eae0be05237202b432 (image=quay.io/ceph/ceph:v19, name=awesome_hoover, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:33:38 compute-0 podman[91129]: 2025-11-25 09:33:38.897790626 +0000 UTC m=+0.094712892 container start ab7f64ff2e4a4081af309b5289919c1002e34f07247248eae0be05237202b432 (image=quay.io/ceph/ceph:v19, name=awesome_hoover, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:33:38 compute-0 podman[91129]: 2025-11-25 09:33:38.900916969 +0000 UTC m=+0.097839235 container attach ab7f64ff2e4a4081af309b5289919c1002e34f07247248eae0be05237202b432 (image=quay.io/ceph/ceph:v19, name=awesome_hoover, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:38 compute-0 podman[91129]: 2025-11-25 09:33:38.818206444 +0000 UTC m=+0.015128730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:39 compute-0 sudo[91131]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 25 09:33:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:39 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:33:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 sudo[91206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:39 compute-0 sudo[91206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91206]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14466 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Nov 25 09:33:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 awesome_hoover[91166]: Option ALERTMANAGER_API_HOST updated
Nov 25 09:33:39 compute-0 sudo[91231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:39 compute-0 sudo[91231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91231]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 podman[91129]: 2025-11-25 09:33:39.177989399 +0000 UTC m=+0.374911665 container died ab7f64ff2e4a4081af309b5289919c1002e34f07247248eae0be05237202b432 (image=quay.io/ceph/ceph:v19, name=awesome_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 09:33:39 compute-0 systemd[1]: libpod-ab7f64ff2e4a4081af309b5289919c1002e34f07247248eae0be05237202b432.scope: Deactivated successfully.
Nov 25 09:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-29805c973f9cef6a2aae9c2136e2ca8826e6c396fcbe70d6758f0a4c386f49f9-merged.mount: Deactivated successfully.
Nov 25 09:33:39 compute-0 podman[91129]: 2025-11-25 09:33:39.201865847 +0000 UTC m=+0.398788113 container remove ab7f64ff2e4a4081af309b5289919c1002e34f07247248eae0be05237202b432 (image=quay.io/ceph/ceph:v19, name=awesome_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:33:39 compute-0 systemd[1]: libpod-conmon-ab7f64ff2e4a4081af309b5289919c1002e34f07247248eae0be05237202b432.scope: Deactivated successfully.
Nov 25 09:33:39 compute-0 sudo[91079]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 sudo[91260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:39 compute-0 sudo[91260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91260]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 sudo[91292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:39 compute-0 sudo[91292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91292]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 sudo[91317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:39 compute-0 sudo[91317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91317]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 sudo[91363]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksxqsgytapqxutvsnwvkfskodirmpozq ; /usr/bin/python3'
Nov 25 09:33:39 compute-0 sudo[91363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:39 compute-0 sudo[91391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:39 compute-0 sudo[91391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91391]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 python3[91367]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:39 compute-0 sudo[91416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:39 compute-0 sudo[91416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91416]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.zcfgby(active, since 2s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:39 compute-0 podman[91439]: 2025-11-25 09:33:39.480654672 +0000 UTC m=+0.043353356 container create 71677b2670476928f542037daf416e04a86a1e93ff2a47479668bd523bf7add4 (image=quay.io/ceph/ceph:v19, name=wizardly_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 systemd[1]: Started libpod-conmon-71677b2670476928f542037daf416e04a86a1e93ff2a47479668bd523bf7add4.scope.
Nov 25 09:33:39 compute-0 sudo[91448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 sudo[91448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91448]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7bbb2420be6621b0580e54f40c61c8f98d5715bb59a3d8d19758c4f8a2dce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7bbb2420be6621b0580e54f40c61c8f98d5715bb59a3d8d19758c4f8a2dce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7bbb2420be6621b0580e54f40c61c8f98d5715bb59a3d8d19758c4f8a2dce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 podman[91439]: 2025-11-25 09:33:39.530027023 +0000 UTC m=+0.092725716 container init 71677b2670476928f542037daf416e04a86a1e93ff2a47479668bd523bf7add4 (image=quay.io/ceph/ceph:v19, name=wizardly_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:33:39 compute-0 podman[91439]: 2025-11-25 09:33:39.536647673 +0000 UTC m=+0.099346356 container start 71677b2670476928f542037daf416e04a86a1e93ff2a47479668bd523bf7add4 (image=quay.io/ceph/ceph:v19, name=wizardly_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:33:39 compute-0 podman[91439]: 2025-11-25 09:33:39.537853466 +0000 UTC m=+0.100552149 container attach 71677b2670476928f542037daf416e04a86a1e93ff2a47479668bd523bf7add4 (image=quay.io/ceph/ceph:v19, name=wizardly_bouman, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:39 compute-0 sudo[91481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:39 compute-0 sudo[91481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91481]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 podman[91439]: 2025-11-25 09:33:39.458980526 +0000 UTC m=+0.021679229 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:39 compute-0 sudo[91507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:39 compute-0 sudo[91507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91507]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 sudo[91532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:39 compute-0 sudo[91532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91532]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 ceph-mon[74207]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:33:39 compute-0 ceph-mon[74207]: Cluster is now healthy
Nov 25 09:33:39 compute-0 ceph-mon[74207]: pgmap v5: 11 pgs: 11 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mon[74207]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mon[74207]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 ceph-mon[74207]: mgrmap e15: compute-0.zcfgby(active, since 2s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:39 compute-0 ceph-mon[74207]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mon[74207]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 ceph-mon[74207]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 sudo[91576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:39 compute-0 sudo[91576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91576]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 sudo[91601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:39 compute-0 sudo[91601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91601]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 sudo[91649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:39 compute-0 sudo[91649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91649]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14472 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Nov 25 09:33:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:39 compute-0 wizardly_bouman[91476]: Option PROMETHEUS_API_HOST updated
Nov 25 09:33:39 compute-0 systemd[1]: libpod-71677b2670476928f542037daf416e04a86a1e93ff2a47479668bd523bf7add4.scope: Deactivated successfully.
Nov 25 09:33:39 compute-0 podman[91439]: 2025-11-25 09:33:39.840077759 +0000 UTC m=+0.402776441 container died 71677b2670476928f542037daf416e04a86a1e93ff2a47479668bd523bf7add4 (image=quay.io/ceph/ceph:v19, name=wizardly_bouman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ee7bbb2420be6621b0580e54f40c61c8f98d5715bb59a3d8d19758c4f8a2dce-merged.mount: Deactivated successfully.
Nov 25 09:33:39 compute-0 sudo[91674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:39 compute-0 podman[91439]: 2025-11-25 09:33:39.858471261 +0000 UTC m=+0.421169945 container remove 71677b2670476928f542037daf416e04a86a1e93ff2a47479668bd523bf7add4 (image=quay.io/ceph/ceph:v19, name=wizardly_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 25 09:33:39 compute-0 sudo[91674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91674]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 systemd[1]: libpod-conmon-71677b2670476928f542037daf416e04a86a1e93ff2a47479668bd523bf7add4.scope: Deactivated successfully.
Nov 25 09:33:39 compute-0 sudo[91363]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 sudo[91711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:39 compute-0 sudo[91711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91711]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:39 compute-0 sudo[91736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:39 compute-0 sudo[91736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91736]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:39 compute-0 sudo[91799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thidpyzpzzaxmfeicxmrfgynlfutrzen ; /usr/bin/python3'
Nov 25 09:33:39 compute-0 sudo[91799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:39 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:39 compute-0 sudo[91769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:39 compute-0 sudo[91769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:39 compute-0 sudo[91769]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 sudo[91812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-0 sudo[91812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[91812]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 sudo[91837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:40 compute-0 sudo[91837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[91837]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 python3[91809]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:40 compute-0 sudo[91862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-0 sudo[91862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[91862]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 podman[91879]: 2025-11-25 09:33:40.142653184 +0000 UTC m=+0.030231510 container create 9f88e0cb6219a55f00910656077dbe1531f40dd84a590d108ee05f8eca17fa0d (image=quay.io/ceph/ceph:v19, name=jolly_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:40 compute-0 systemd[1]: Started libpod-conmon-9f88e0cb6219a55f00910656077dbe1531f40dd84a590d108ee05f8eca17fa0d.scope.
Nov 25 09:33:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5201532cf8dbe382a4af6d5905bf10bb95bffee1cc1a71476b62618ddd884d45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5201532cf8dbe382a4af6d5905bf10bb95bffee1cc1a71476b62618ddd884d45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5201532cf8dbe382a4af6d5905bf10bb95bffee1cc1a71476b62618ddd884d45/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:40 compute-0 podman[91879]: 2025-11-25 09:33:40.18429445 +0000 UTC m=+0.071872786 container init 9f88e0cb6219a55f00910656077dbe1531f40dd84a590d108ee05f8eca17fa0d (image=quay.io/ceph/ceph:v19, name=jolly_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Nov 25 09:33:40 compute-0 podman[91879]: 2025-11-25 09:33:40.190967509 +0000 UTC m=+0.078545825 container start 9f88e0cb6219a55f00910656077dbe1531f40dd84a590d108ee05f8eca17fa0d (image=quay.io/ceph/ceph:v19, name=jolly_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:33:40 compute-0 podman[91879]: 2025-11-25 09:33:40.192101968 +0000 UTC m=+0.079680304 container attach 9f88e0cb6219a55f00910656077dbe1531f40dd84a590d108ee05f8eca17fa0d (image=quay.io/ceph/ceph:v19, name=jolly_allen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:40 compute-0 sudo[91925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-0 sudo[91925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[91925]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 podman[91879]: 2025-11-25 09:33:40.132213805 +0000 UTC m=+0.019792141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:40 compute-0 sudo[91951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-0 sudo[91951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[91951]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 sudo[91986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 sudo[91986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[91986]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 sudo[92020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:40 compute-0 sudo[92020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[92020]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 sudo[92045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:40 compute-0 sudo[92045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[92045]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 sudo[92070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-0 sudo[92070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[92070]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Nov 25 09:33:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 jolly_allen[91922]: Option GRAFANA_API_URL updated
Nov 25 09:33:40 compute-0 sudo[92095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:40 compute-0 sudo[92095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[92095]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 systemd[1]: libpod-9f88e0cb6219a55f00910656077dbe1531f40dd84a590d108ee05f8eca17fa0d.scope: Deactivated successfully.
Nov 25 09:33:40 compute-0 podman[91879]: 2025-11-25 09:33:40.484143906 +0000 UTC m=+0.371722232 container died 9f88e0cb6219a55f00910656077dbe1531f40dd84a590d108ee05f8eca17fa0d (image=quay.io/ceph/ceph:v19, name=jolly_allen, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5201532cf8dbe382a4af6d5905bf10bb95bffee1cc1a71476b62618ddd884d45-merged.mount: Deactivated successfully.
Nov 25 09:33:40 compute-0 podman[91879]: 2025-11-25 09:33:40.507987172 +0000 UTC m=+0.395565488 container remove 9f88e0cb6219a55f00910656077dbe1531f40dd84a590d108ee05f8eca17fa0d (image=quay.io/ceph/ceph:v19, name=jolly_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:40 compute-0 systemd[1]: libpod-conmon-9f88e0cb6219a55f00910656077dbe1531f40dd84a590d108ee05f8eca17fa0d.scope: Deactivated successfully.
Nov 25 09:33:40 compute-0 sudo[91799]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 sudo[92122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-0 sudo[92122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[92122]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 sudo[92180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-0 sudo[92180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[92180]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 sudo[92228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbgkratakuzvalvqwxufzlbrmbdvnven ; /usr/bin/python3'
Nov 25 09:33:40 compute-0 sudo[92228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v6: 11 pgs: 11 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:40 compute-0 sudo[92229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:40 compute-0 sudo[92229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[92229]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 sudo[92256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 sudo[92256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:40 compute-0 sudo[92256]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 python3[92238]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:40 compute-0 podman[92281]: 2025-11-25 09:33:40.78306896 +0000 UTC m=+0.027659030 container create 15a7ff25b2e4e4c6f26c019dfdbcb049ee532cdc8259f1d4294ccd5039a40547 (image=quay.io/ceph/ceph:v19, name=cranky_blackburn, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:40 compute-0 systemd[1]: Started libpod-conmon-15a7ff25b2e4e4c6f26c019dfdbcb049ee532cdc8259f1d4294ccd5039a40547.scope.
Nov 25 09:33:40 compute-0 ceph-mon[74207]: from='client.14472 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:40 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mon[74207]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mon[74207]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mon[74207]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mon[74207]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mon[74207]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mon[74207]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:40 compute-0 ceph-mon[74207]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:40 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381b08b67e39b1860ce5756a90ebacc81f4954041c733af485f5ea306691c78c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381b08b67e39b1860ce5756a90ebacc81f4954041c733af485f5ea306691c78c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381b08b67e39b1860ce5756a90ebacc81f4954041c733af485f5ea306691c78c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:40 compute-0 podman[92281]: 2025-11-25 09:33:40.837927582 +0000 UTC m=+0.082517673 container init 15a7ff25b2e4e4c6f26c019dfdbcb049ee532cdc8259f1d4294ccd5039a40547 (image=quay.io/ceph/ceph:v19, name=cranky_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:40 compute-0 podman[92281]: 2025-11-25 09:33:40.841973729 +0000 UTC m=+0.086563800 container start 15a7ff25b2e4e4c6f26c019dfdbcb049ee532cdc8259f1d4294ccd5039a40547 (image=quay.io/ceph/ceph:v19, name=cranky_blackburn, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:33:40 compute-0 podman[92281]: 2025-11-25 09:33:40.84314666 +0000 UTC m=+0.087736731 container attach 15a7ff25b2e4e4c6f26c019dfdbcb049ee532cdc8259f1d4294ccd5039a40547 (image=quay.io/ceph/ceph:v19, name=cranky_blackburn, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 podman[92281]: 2025-11-25 09:33:40.771464013 +0000 UTC m=+0.016054104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:33:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 1d46115a-b5e0-48ae-ab52-98fcd7d1d0bf (Updating node-exporter deployment (+2 -> 3))
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Nov 25 09:33:40 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Nov 25 09:33:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Nov 25 09:33:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/321985415' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 25 09:33:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/321985415' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 25 09:33:41 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.zcfgby(active, since 4s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:41 compute-0 systemd[1]: libpod-15a7ff25b2e4e4c6f26c019dfdbcb049ee532cdc8259f1d4294ccd5039a40547.scope: Deactivated successfully.
Nov 25 09:33:41 compute-0 podman[92281]: 2025-11-25 09:33:41.498040268 +0000 UTC m=+0.742630339 container died 15a7ff25b2e4e4c6f26c019dfdbcb049ee532cdc8259f1d4294ccd5039a40547 (image=quay.io/ceph/ceph:v19, name=cranky_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-381b08b67e39b1860ce5756a90ebacc81f4954041c733af485f5ea306691c78c-merged.mount: Deactivated successfully.
Nov 25 09:33:41 compute-0 podman[92281]: 2025-11-25 09:33:41.517025656 +0000 UTC m=+0.761615727 container remove 15a7ff25b2e4e4c6f26c019dfdbcb049ee532cdc8259f1d4294ccd5039a40547 (image=quay.io/ceph/ceph:v19, name=cranky_blackburn, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:33:41 compute-0 systemd[1]: libpod-conmon-15a7ff25b2e4e4c6f26c019dfdbcb049ee532cdc8259f1d4294ccd5039a40547.scope: Deactivated successfully.
Nov 25 09:33:41 compute-0 sudo[92228]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:41 compute-0 sshd-session[90597]: Connection closed by 192.168.122.100 port 42080
Nov 25 09:33:41 compute-0 sshd-session[90575]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:33:41 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 25 09:33:41 compute-0 systemd[1]: session-34.scope: Consumed 3.220s CPU time.
Nov 25 09:33:41 compute-0 systemd-logind[744]: Session 34 logged out. Waiting for processes to exit.
Nov 25 09:33:41 compute-0 systemd-logind[744]: Removed session 34.
Nov 25 09:33:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setuser ceph since I am not root
Nov 25 09:33:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setgroup ceph since I am not root
Nov 25 09:33:41 compute-0 ceph-mgr[74476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:33:41 compute-0 ceph-mgr[74476]: pidfile_write: ignore empty --pid-file
Nov 25 09:33:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'alerts'
Nov 25 09:33:41 compute-0 sudo[92371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtvbohbmbjyyjfdjmkbpawcuvcethdbe ; /usr/bin/python3'
Nov 25 09:33:41 compute-0 sudo[92371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:41.666+0000 7f7903287140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:41 compute-0 ceph-mgr[74476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'balancer'
Nov 25 09:33:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:41.736+0000 7f7903287140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:41 compute-0 ceph-mgr[74476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'cephadm'
Nov 25 09:33:41 compute-0 python3[92373]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:41 compute-0 podman[92374]: 2025-11-25 09:33:41.781241 +0000 UTC m=+0.027794075 container create 335a471813beb3966c09d2db485fa468f28bcaa0db69eb226cb6f86445c4b60a (image=quay.io/ceph/ceph:v19, name=nifty_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:33:41 compute-0 systemd[1]: Started libpod-conmon-335a471813beb3966c09d2db485fa468f28bcaa0db69eb226cb6f86445c4b60a.scope.
Nov 25 09:33:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cdbd284cec7bac2ae7ba916db5dffb984993c4631bab77687bd49c54b7ad7a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cdbd284cec7bac2ae7ba916db5dffb984993c4631bab77687bd49c54b7ad7a0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cdbd284cec7bac2ae7ba916db5dffb984993c4631bab77687bd49c54b7ad7a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:41 compute-0 podman[92374]: 2025-11-25 09:33:41.832421909 +0000 UTC m=+0.078974984 container init 335a471813beb3966c09d2db485fa468f28bcaa0db69eb226cb6f86445c4b60a (image=quay.io/ceph/ceph:v19, name=nifty_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:41 compute-0 podman[92374]: 2025-11-25 09:33:41.836875184 +0000 UTC m=+0.083428249 container start 335a471813beb3966c09d2db485fa468f28bcaa0db69eb226cb6f86445c4b60a (image=quay.io/ceph/ceph:v19, name=nifty_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:33:41 compute-0 podman[92374]: 2025-11-25 09:33:41.837981319 +0000 UTC m=+0.084534384 container attach 335a471813beb3966c09d2db485fa468f28bcaa0db69eb226cb6f86445c4b60a (image=quay.io/ceph/ceph:v19, name=nifty_williamson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:41 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:41 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:41 compute-0 ceph-mon[74207]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:41 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/321985415' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 25 09:33:41 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/321985415' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 25 09:33:41 compute-0 ceph-mon[74207]: mgrmap e16: compute-0.zcfgby(active, since 4s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:41 compute-0 podman[92374]: 2025-11-25 09:33:41.770484753 +0000 UTC m=+0.017037808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Nov 25 09:33:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2461625104' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 25 09:33:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'crash'
Nov 25 09:33:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:42.400+0000 7f7903287140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:42 compute-0 ceph-mgr[74476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'dashboard'
Nov 25 09:33:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:33:42 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2461625104' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 25 09:33:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2461625104' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 25 09:33:42 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.zcfgby(active, since 6s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:42 compute-0 systemd[1]: libpod-335a471813beb3966c09d2db485fa468f28bcaa0db69eb226cb6f86445c4b60a.scope: Deactivated successfully.
Nov 25 09:33:42 compute-0 podman[92374]: 2025-11-25 09:33:42.898098159 +0000 UTC m=+1.144651224 container died 335a471813beb3966c09d2db485fa468f28bcaa0db69eb226cb6f86445c4b60a (image=quay.io/ceph/ceph:v19, name=nifty_williamson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cdbd284cec7bac2ae7ba916db5dffb984993c4631bab77687bd49c54b7ad7a0-merged.mount: Deactivated successfully.
Nov 25 09:33:42 compute-0 podman[92374]: 2025-11-25 09:33:42.917416225 +0000 UTC m=+1.163969290 container remove 335a471813beb3966c09d2db485fa468f28bcaa0db69eb226cb6f86445c4b60a (image=quay.io/ceph/ceph:v19, name=nifty_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:42 compute-0 systemd[1]: libpod-conmon-335a471813beb3966c09d2db485fa468f28bcaa0db69eb226cb6f86445c4b60a.scope: Deactivated successfully.
Nov 25 09:33:42 compute-0 sudo[92371]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:42.939+0000 7f7903287140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:42 compute-0 ceph-mgr[74476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:33:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:33:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:33:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   from numpy import show_config as show_numpy_config
Nov 25 09:33:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:43.078+0000 7f7903287140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'influx'
Nov 25 09:33:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:43.139+0000 7f7903287140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'insights'
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'iostat'
Nov 25 09:33:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:43.253+0000 7f7903287140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:33:43 compute-0 python3[92506]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'localpool'
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:33:43 compute-0 python3[92577]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063223.2976227-37714-77681048526883/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mirroring'
Nov 25 09:33:43 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2461625104' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 25 09:33:43 compute-0 ceph-mon[74207]: mgrmap e17: compute-0.zcfgby(active, since 6s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'nfs'
Nov 25 09:33:44 compute-0 sudo[92625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsbitmdnmobthogeyumyxrpldppanfhx ; /usr/bin/python3'
Nov 25 09:33:44 compute-0 sudo[92625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:44.095+0000 7f7903287140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:33:44 compute-0 python3[92627]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
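(The ansible task above shells out to a one-off ceph client container to create the CephFS volume. A minimal Python sketch of the same call follows, with the image, mounts, fsid, and placement taken verbatim from the log line; this is an illustration, not the playbook's actual code.)

    import subprocess

    # One-off containerized `ceph fs volume create`, mirroring the log line above.
    subprocess.run([
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--volume", "/tmp/ceph_mds.yml:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "fs", "volume", "create", "cephfs",
        "--placement=compute-0 compute-1 compute-2",
    ], check=True)  # raise if the ceph command exits non-zero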
Nov 25 09:33:44 compute-0 podman[92628]: 2025-11-25 09:33:44.19035003 +0000 UTC m=+0.027749852 container create 9a41ae72d3e3202e6ca14d93cb7c1b1cbd15be76da0da60d27381214310a9bd6 (image=quay.io/ceph/ceph:v19, name=sharp_jang, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:44 compute-0 systemd[1]: Started libpod-conmon-9a41ae72d3e3202e6ca14d93cb7c1b1cbd15be76da0da60d27381214310a9bd6.scope.
Nov 25 09:33:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dd7581ed5e80de6a1b240bab16cda79ee8c70369732801fa6e3b324efe2c2f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dd7581ed5e80de6a1b240bab16cda79ee8c70369732801fa6e3b324efe2c2f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dd7581ed5e80de6a1b240bab16cda79ee8c70369732801fa6e3b324efe2c2f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:44 compute-0 podman[92628]: 2025-11-25 09:33:44.246536745 +0000 UTC m=+0.083936567 container init 9a41ae72d3e3202e6ca14d93cb7c1b1cbd15be76da0da60d27381214310a9bd6 (image=quay.io/ceph/ceph:v19, name=sharp_jang, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:33:44 compute-0 podman[92628]: 2025-11-25 09:33:44.251493829 +0000 UTC m=+0.088893641 container start 9a41ae72d3e3202e6ca14d93cb7c1b1cbd15be76da0da60d27381214310a9bd6 (image=quay.io/ceph/ceph:v19, name=sharp_jang, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:44 compute-0 podman[92628]: 2025-11-25 09:33:44.252729839 +0000 UTC m=+0.090129661 container attach 9a41ae72d3e3202e6ca14d93cb7c1b1cbd15be76da0da60d27381214310a9bd6 (image=quay.io/ceph/ceph:v19, name=sharp_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:33:44 compute-0 podman[92628]: 2025-11-25 09:33:44.179637005 +0000 UTC m=+0.017036837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:44.281+0000 7f7903287140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:33:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:44.347+0000 7f7903287140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_support'
Nov 25 09:33:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:44.404+0000 7f7903287140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:33:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:44.472+0000 7f7903287140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'progress'
Nov 25 09:33:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:44.533+0000 7f7903287140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'prometheus'
Nov 25 09:33:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:44.827+0000 7f7903287140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:33:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:44.907+0000 7f7903287140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'restful'
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rgw'
Nov 25 09:33:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:45.263+0000 7f7903287140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rook'
Nov 25 09:33:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:45.734+0000 7f7903287140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'selftest'
Nov 25 09:33:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:45.795+0000 7f7903287140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:33:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:45.865+0000 7f7903287140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'stats'
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'status'
Nov 25 09:33:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:45.993+0000 7f7903287140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:45 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telegraf'
Nov 25 09:33:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:46.053+0000 7f7903287140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telemetry'
Nov 25 09:33:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:46.186+0000 7f7903287140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:33:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:46.371+0000 7f7903287140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'volumes'
Nov 25 09:33:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:46.591+0000 7f7903287140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'zabbix'
Nov 25 09:33:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:46.650+0000 7f7903287140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
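(The recurring "has missing NOTIFY_TYPES member" lines are harmless: the mgr emits one for every Python module that does not declare which cluster notifications it wants. A minimal sketch of a module that does declare them, assuming the in-tree mgr_module API of this Ceph release; everything except MgrModule, NotifyType, NOTIFY_TYPES, and notify() is hypothetical.)

    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Declaring NOTIFY_TYPES silences the warning and lets the mgr
        # deliver only the notifications this module cares about.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            # Called by the mgr for each subscribed notification.
            if notify_type == NotifyType.osd_map:
                self.log.info("osdmap changed")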
Nov 25 09:33:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zcfgby restarted
Nov 25 09:33:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 25 09:33:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zcfgby
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: ms_deliver_dispatch: unhandled message 0x55c54c533860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr respawn  1: '-n'
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr respawn  2: 'mgr.compute-0.zcfgby'
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr respawn  3: '-f'
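(Enabling the dashboard module changed the active mgr's set of enabled modules, so the daemon re-execs itself with its original argv, shown as entries 0..3 above. ceph-mgr does this in C++; the following is only an illustrative Python sketch of the same re-exec mechanism.)

    import os

    def respawn():
        exe = "/usr/bin/ceph-mgr"                          # the 'e:' line above
        argv = [exe, "-n", "mgr.compute-0.zcfgby", "-f"]   # argv entries 0..3 above
        os.execv(exe, argv)  # replaces the process image in place; no fork, same pid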
Nov 25 09:33:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 25 09:33:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 25 09:33:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.zcfgby(active, starting, since 0.0220563s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:46 compute-0 ceph-mon[74207]: Active manager daemon compute-0.zcfgby restarted
Nov 25 09:33:46 compute-0 ceph-mon[74207]: Activating manager daemon compute-0.zcfgby
Nov 25 09:33:46 compute-0 ceph-mon[74207]: osdmap e36: 3 total, 3 up, 3 in
Nov 25 09:33:46 compute-0 ceph-mon[74207]: mgrmap e18: compute-0.zcfgby(active, starting, since 0.0220563s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:33:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setuser ceph since I am not root
Nov 25 09:33:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setgroup ceph since I am not root
Nov 25 09:33:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.flybft restarted
Nov 25 09:33:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.flybft started
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: pidfile_write: ignore empty --pid-file
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'alerts'
Nov 25 09:33:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:46.832+0000 7f4e2a13f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'balancer'
Nov 25 09:33:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.plffrn restarted
Nov 25 09:33:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.plffrn started
Nov 25 09:33:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:46.902+0000 7f4e2a13f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:33:46 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'cephadm'
Nov 25 09:33:47 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'crash'
Nov 25 09:33:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:47.559+0000 7f4e2a13f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:47 compute-0 ceph-mgr[74476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:33:47 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'dashboard'
Nov 25 09:33:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:47 compute-0 ceph-mon[74207]: Standby manager daemon compute-2.flybft restarted
Nov 25 09:33:47 compute-0 ceph-mon[74207]: Standby manager daemon compute-2.flybft started
Nov 25 09:33:47 compute-0 ceph-mon[74207]: Standby manager daemon compute-1.plffrn restarted
Nov 25 09:33:47 compute-0 ceph-mon[74207]: Standby manager daemon compute-1.plffrn started
Nov 25 09:33:47 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.zcfgby(active, starting, since 1.07823s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:33:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:48.097+0000 7f4e2a13f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:33:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:33:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:33:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   from numpy import show_config as show_numpy_config
Nov 25 09:33:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:48.237+0000 7f4e2a13f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'influx'
Nov 25 09:33:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:48.295+0000 7f4e2a13f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'insights'
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'iostat'
Nov 25 09:33:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:48.414+0000 7f4e2a13f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:33:48 compute-0 ceph-mon[74207]: mgrmap e19: compute-0.zcfgby(active, starting, since 1.07823s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'localpool'
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:33:48 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mirroring'
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'nfs'
Nov 25 09:33:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:49.258+0000 7f4e2a13f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:33:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:49.439+0000 7f4e2a13f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:33:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:49.500+0000 7f4e2a13f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_support'
Nov 25 09:33:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:49.555+0000 7f4e2a13f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:33:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:49.623+0000 7f4e2a13f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'progress'
Nov 25 09:33:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:49.683+0000 7f4e2a13f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'prometheus'
Nov 25 09:33:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:49.976+0000 7f4e2a13f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:33:49 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:33:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:50.062+0000 7f4e2a13f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-0 ceph-mgr[74476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'restful'
Nov 25 09:33:50 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rgw'
Nov 25 09:33:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:50.431+0000 7f4e2a13f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-0 ceph-mgr[74476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rook'
Nov 25 09:33:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:50.903+0000 7f4e2a13f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-0 ceph-mgr[74476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'selftest'
Nov 25 09:33:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:50.965+0000 7f4e2a13f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-0 ceph-mgr[74476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:33:50 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:33:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:51.035+0000 7f4e2a13f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'stats'
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'status'
Nov 25 09:33:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:51.164+0000 7f4e2a13f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telegraf'
Nov 25 09:33:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:51.223+0000 7f4e2a13f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telemetry'
Nov 25 09:33:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:51.354+0000 7f4e2a13f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:33:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:51.538+0000 7f4e2a13f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'volumes'
Nov 25 09:33:51 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 25 09:33:51 compute-0 systemd[75447]: Activating special unit Exit the Session...
Nov 25 09:33:51 compute-0 systemd[75447]: Stopped target Main User Target.
Nov 25 09:33:51 compute-0 systemd[75447]: Stopped target Basic System.
Nov 25 09:33:51 compute-0 systemd[75447]: Stopped target Paths.
Nov 25 09:33:51 compute-0 systemd[75447]: Stopped target Sockets.
Nov 25 09:33:51 compute-0 systemd[75447]: Stopped target Timers.
Nov 25 09:33:51 compute-0 systemd[75447]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 25 09:33:51 compute-0 systemd[75447]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 09:33:51 compute-0 systemd[75447]: Closed D-Bus User Message Bus Socket.
Nov 25 09:33:51 compute-0 systemd[75447]: Stopped Create User's Volatile Files and Directories.
Nov 25 09:33:51 compute-0 systemd[75447]: Removed slice User Application Slice.
Nov 25 09:33:51 compute-0 systemd[75447]: Reached target Shutdown.
Nov 25 09:33:51 compute-0 systemd[75447]: Finished Exit the Session.
Nov 25 09:33:51 compute-0 systemd[75447]: Reached target Exit the Session.
Nov 25 09:33:51 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 25 09:33:51 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 25 09:33:51 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 25 09:33:51 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 25 09:33:51 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 25 09:33:51 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 25 09:33:51 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 25 09:33:51 compute-0 systemd[1]: user-42477.slice: Consumed 24.463s CPU time.
Nov 25 09:33:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:51.762+0000 7f4e2a13f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'zabbix'
Nov 25 09:33:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:33:51.823+0000 7f4e2a13f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zcfgby restarted
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zcfgby
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: ms_deliver_dispatch: unhandled message 0x55e5ad72b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr handle_mgr_map Activating!
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.zcfgby(active, starting, since 0.0170664s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr handle_mgr_map I am now activating
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: balancer
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Manager daemon compute-0.zcfgby is now available
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [balancer INFO root] Starting
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:33:51
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: cephadm
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: crash
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: dashboard
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [dashboard INFO sso] Loading SSO DB version=1
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: devicehealth
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Starting
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: iostat
Nov 25 09:33:51 compute-0 ceph-mon[74207]: Active manager daemon compute-0.zcfgby restarted
Nov 25 09:33:51 compute-0 ceph-mon[74207]: Activating manager daemon compute-0.zcfgby
Nov 25 09:33:51 compute-0 ceph-mon[74207]: osdmap e37: 3 total, 3 up, 3 in
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mgrmap e20: compute-0.zcfgby(active, starting, since 0.0170664s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mon[74207]: Manager daemon compute-0.zcfgby is now available
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: nfs
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: orchestrator
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: pg_autoscaler
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: progress
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [progress INFO root] Loading...
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f4dce4d56d0>, <progress.module.GhostEvent object at 0x7f4dce4d57f0>, <progress.module.GhostEvent object at 0x7f4dce4d57c0>, <progress.module.GhostEvent object at 0x7f4dce4d5790>, <progress.module.GhostEvent object at 0x7f4dce4d5b50>] historic events
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [rbd_support INFO root] recovery thread starting
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [rbd_support INFO root] starting setup
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: rbd_support
Nov 25 09:33:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"} v 0)
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: restful
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: status
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [restful WARNING root] server not running: no certificate configured
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: telemetry
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.flybft restarted
Nov 25 09:33:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.flybft started
Nov 25 09:33:51 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] PerfHandler: starting
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.plffrn restarted
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.plffrn started
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TaskHandler: starting
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"} v 0)
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: volumes
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [rbd_support INFO root] setup complete
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
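
The block above is the dashboard mgr module registering its REST controllers at startup, one line per controller class and the /api or /ui-api route it serves; the module finishes coming up at the "Engine started." line further down. A quick way to confirm the module is serving and to find its URL (a minimal check, not taken from this log):

    ceph mgr module ls | grep dashboard   # module should be listed as enabled
    ceph mgr services                     # prints active URLs, e.g. {"dashboard": "https://..."}
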
Nov 25 09:33:52 compute-0 sshd-session[92812]: Accepted publickey for ceph-admin from 192.168.122.100 port 58450 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:33:52 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 09:33:52 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 09:33:52 compute-0 systemd-logind[744]: New session 35 of user ceph-admin.
Nov 25 09:33:52 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 09:33:52 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 25 09:33:52 compute-0 systemd[92827]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.module] Engine started.
Nov 25 09:33:52 compute-0 systemd[92827]: Queued start job for default target Main User Target.
Nov 25 09:33:52 compute-0 systemd[92827]: Created slice User Application Slice.
Nov 25 09:33:52 compute-0 systemd[92827]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 09:33:52 compute-0 systemd[92827]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 09:33:52 compute-0 systemd[92827]: Reached target Paths.
Nov 25 09:33:52 compute-0 systemd[92827]: Reached target Timers.
Nov 25 09:33:52 compute-0 systemd[92827]: Starting D-Bus User Message Bus Socket...
Nov 25 09:33:52 compute-0 systemd[92827]: Starting Create User's Volatile Files and Directories...
Nov 25 09:33:52 compute-0 systemd[92827]: Listening on D-Bus User Message Bus Socket.
Nov 25 09:33:52 compute-0 systemd[92827]: Reached target Sockets.
Nov 25 09:33:52 compute-0 systemd[92827]: Finished Create User's Volatile Files and Directories.
Nov 25 09:33:52 compute-0 systemd[92827]: Reached target Basic System.
Nov 25 09:33:52 compute-0 systemd[92827]: Reached target Main User Target.
Nov 25 09:33:52 compute-0 systemd[92827]: Startup finished in 103ms.
Nov 25 09:33:52 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 25 09:33:52 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Nov 25 09:33:52 compute-0 sshd-session[92812]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
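
The sshd and systemd lines above are the cephadm orchestrator opening its management session: the active mgr connects to each host over SSH as ceph-admin and runs everything that follows through sudo. The SSH identity it authenticates with can be pulled from the cluster (documented cephadm sub-commands; a sketch, not from this log):

    ceph cephadm get-pub-key      # public key installed for the ceph-admin user
    ceph cephadm get-ssh-config   # SSH options the mgr uses for these sessions
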
Nov 25 09:33:52 compute-0 sudo[92843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:52 compute-0 sudo[92843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:52 compute-0 sudo[92843]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:52 compute-0 sudo[92868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:33:52 compute-0 sudo[92868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
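
Here the orchestrator executes its content-addressed copy of the cephadm binary (the hash-suffixed file under /var/lib/ceph/<fsid>/) with the `ls` subcommand to inventory the daemons deployed on this host. Run by hand, the packaged binary gives the same result (assuming cephadm is installed on the host):

    sudo cephadm ls   # JSON list of ceph daemons configured on this host
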
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.zcfgby(active, since 1.02783s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14511 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v3: 11 pgs: 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
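
`fs volume create` is a convenience wrapper: the volumes module issues exactly the three mon commands dispatched above. The equivalent manual sequence, reconstructed from those audit entries (default pool settings assumed):

    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data
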
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 09:33:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0[74203]: 2025-11-25T09:33:52.870+0000 7f0ff7a67640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e2 new map
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-11-25T09:33:52.871701+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:33:52.871685+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap cephfs:0
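
The indented print_map block above is the mon's dump of fsmap epoch 2: the filesystem exists, but `in` and `up {}` are empty because no MDS has joined yet, hence "fsmap cephfs:0". The same map can be pulled on demand:

    ceph fs dump        # full fsmap, equivalent to the print_map output
    ceph fs get cephfs  # just the 'cephfs' filesystem section
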
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
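
"Saving service mds.cephfs spec" means cephadm persisted a service specification under the mgr/cephadm/spec.mds.cephfs config-key (the mon_command on the following line). Judging by the placement string, the spec applied here is roughly as follows (a reconstruction, not the stored document; /tmp/ceph_mds.yml is the file the Ansible task below mounts):

    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF
    ceph orch apply -i /tmp/ceph_mds.yml
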
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 25 09:33:52 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:33:52 compute-0 ceph-mon[74207]: Standby manager daemon compute-2.flybft restarted
Nov 25 09:33:52 compute-0 ceph-mon[74207]: Standby manager daemon compute-2.flybft started
Nov 25 09:33:52 compute-0 ceph-mon[74207]: Standby manager daemon compute-1.plffrn restarted
Nov 25 09:33:52 compute-0 ceph-mon[74207]: Standby manager daemon compute-1.plffrn started
Nov 25 09:33:52 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:33:52 compute-0 ceph-mon[74207]: mgrmap e21: compute-0.zcfgby(active, since 1.02783s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:52 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 09:33:52 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 09:33:52 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 09:33:52 compute-0 ceph-mon[74207]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 09:33:52 compute-0 ceph-mon[74207]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 09:33:52 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 09:33:52 compute-0 ceph-mon[74207]: osdmap e38: 3 total, 3 up, 3 in
Nov 25 09:33:52 compute-0 ceph-mon[74207]: fsmap cephfs:0
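
MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX fire immediately because `fs new` creates the filesystem before any MDS daemon exists ("fsmap cephfs:0"); they should clear on their own once the mds.cephfs daemons scheduled below come up. To watch that happen:

    ceph health detail     # shows both MDS checks while they last
    ceph fs status cephfs  # MDS count should go from 0 to 1 active plus standbys
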
Nov 25 09:33:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:52 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 09:33:52 compute-0 systemd[1]: libpod-9a41ae72d3e3202e6ca14d93cb7c1b1cbd15be76da0da60d27381214310a9bd6.scope: Deactivated successfully.
Nov 25 09:33:52 compute-0 podman[92628]: 2025-11-25 09:33:52.913750504 +0000 UTC m=+8.751150317 container died 9a41ae72d3e3202e6ca14d93cb7c1b1cbd15be76da0da60d27381214310a9bd6 (image=quay.io/ceph/ceph:v19, name=sharp_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 09:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9dd7581ed5e80de6a1b240bab16cda79ee8c70369732801fa6e3b324efe2c2f-merged.mount: Deactivated successfully.
Nov 25 09:33:52 compute-0 podman[92628]: 2025-11-25 09:33:52.937982032 +0000 UTC m=+8.775381845 container remove 9a41ae72d3e3202e6ca14d93cb7c1b1cbd15be76da0da60d27381214310a9bd6 (image=quay.io/ceph/ceph:v19, name=sharp_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:33:52 compute-0 systemd[1]: libpod-conmon-9a41ae72d3e3202e6ca14d93cb7c1b1cbd15be76da0da60d27381214310a9bd6.scope: Deactivated successfully.
Nov 25 09:33:52 compute-0 sudo[92625]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-0 podman[92960]: 2025-11-25 09:33:53.002911609 +0000 UTC m=+0.035257664 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:53 compute-0 sudo[93000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wicafutrygtwnwmrhxfpnjsdszrbkatf ; /usr/bin/python3'
Nov 25 09:33:53 compute-0 sudo[93000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:53 compute-0 podman[92960]: 2025-11-25 09:33:53.082140321 +0000 UTC m=+0.114486375 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 python3[93002]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
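
The Ansible task above shells out to a containerized ceph CLI. The same invocation, wrapped for readability (verbatim apart from the line breaks):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch apply --in-file /home/ceph_spec.yaml
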
Nov 25 09:33:53 compute-0 podman[93038]: 2025-11-25 09:33:53.25312588 +0000 UTC m=+0.027539584 container create 11fbb131e4bd6bc526e2c06e4ef0b2351579ed992217fe392c8eef153f831f1b (image=quay.io/ceph/ceph:v19, name=great_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:53 compute-0 systemd[1]: Started libpod-conmon-11fbb131e4bd6bc526e2c06e4ef0b2351579ed992217fe392c8eef153f831f1b.scope.
Nov 25 09:33:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f2bedf1fb8e25de00aae0a8c410f39f6dd4e2ed9357daa7838ae83a14d5b73b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f2bedf1fb8e25de00aae0a8c410f39f6dd4e2ed9357daa7838ae83a14d5b73b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f2bedf1fb8e25de00aae0a8c410f39f6dd4e2ed9357daa7838ae83a14d5b73b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:53 compute-0 podman[93038]: 2025-11-25 09:33:53.30898611 +0000 UTC m=+0.083399835 container init 11fbb131e4bd6bc526e2c06e4ef0b2351579ed992217fe392c8eef153f831f1b (image=quay.io/ceph/ceph:v19, name=great_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:53 compute-0 podman[93038]: 2025-11-25 09:33:53.314494253 +0000 UTC m=+0.088907959 container start 11fbb131e4bd6bc526e2c06e4ef0b2351579ed992217fe392c8eef153f831f1b (image=quay.io/ceph/ceph:v19, name=great_tesla, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:33:53 compute-0 podman[93038]: 2025-11-25 09:33:53.315757745 +0000 UTC m=+0.090171449 container attach 11fbb131e4bd6bc526e2c06e4ef0b2351579ed992217fe392c8eef153f831f1b (image=quay.io/ceph/ceph:v19, name=great_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:53] ENGINE Bus STARTING
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:53] ENGINE Bus STARTING
Nov 25 09:33:53 compute-0 podman[93038]: 2025-11-25 09:33:53.241388314 +0000 UTC m=+0.015802039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:53] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:53] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:53] ENGINE Client ('192.168.122.100', 59432) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:53] ENGINE Client ('192.168.122.100', 59432) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
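
The dropped handshake on port 7150 is logged at INFO, not ERR: a client (most likely a probe checking the freshly started cephadm HTTPS endpoint) connected and closed before completing TLS. If it recurs, the endpoint can be probed manually (assuming it is reachable from this host):

    curl -ks -o /dev/null -w '%{http_code}\n' https://192.168.122.100:7150/
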
Nov 25 09:33:53 compute-0 podman[93155]: 2025-11-25 09:33:53.489955359 +0000 UTC m=+0.033816587 container exec dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:53 compute-0 podman[93155]: 2025-11-25 09:33:53.498135609 +0000 UTC m=+0.041996838 container exec_died dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:33:53 compute-0 sudo[92868]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:53] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:53] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:33:53] ENGINE Bus STARTED
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:33:53] ENGINE Bus STARTED
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14553 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 25 09:33:53 compute-0 sudo[93184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 great_tesla[93064]: Scheduled mds.cephfs update...
Nov 25 09:33:53 compute-0 sudo[93184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:53 compute-0 sudo[93184]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-0 systemd[1]: libpod-11fbb131e4bd6bc526e2c06e4ef0b2351579ed992217fe392c8eef153f831f1b.scope: Deactivated successfully.
Nov 25 09:33:53 compute-0 podman[93038]: 2025-11-25 09:33:53.619204441 +0000 UTC m=+0.393618146 container died 11fbb131e4bd6bc526e2c06e4ef0b2351579ed992217fe392c8eef153f831f1b (image=quay.io/ceph/ceph:v19, name=great_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2bedf1fb8e25de00aae0a8c410f39f6dd4e2ed9357daa7838ae83a14d5b73b-merged.mount: Deactivated successfully.
Nov 25 09:33:53 compute-0 podman[93038]: 2025-11-25 09:33:53.64029017 +0000 UTC m=+0.414703875 container remove 11fbb131e4bd6bc526e2c06e4ef0b2351579ed992217fe392c8eef153f831f1b (image=quay.io/ceph/ceph:v19, name=great_tesla, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:33:53 compute-0 sudo[93211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:33:53 compute-0 sudo[93211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:53 compute-0 systemd[1]: libpod-conmon-11fbb131e4bd6bc526e2c06e4ef0b2351579ed992217fe392c8eef153f831f1b.scope: Deactivated successfully.
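
"Scheduled mds.cephfs update..." printed by the great_tesla container is the CLI acknowledgment from `orch apply`; the daemons themselves are created asynchronously by the cephadm serve loop after the container exits. Progress can be checked with:

    ceph orch ls mds                 # spec plus running/expected daemon count
    ceph orch ps --daemon_type mds   # one line per scheduled mds daemon
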
Nov 25 09:33:53 compute-0 sudo[93000]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:53 compute-0 sudo[93268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csshxhaglxstucspldprpieliwzyadao ; /usr/bin/python3'
Nov 25 09:33:53 compute-0 sudo[93268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v5: 11 pgs: 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:53 compute-0 python3[93270]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
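
This task creates the NFS service with an ingress layer in front: cephadm will deploy nfs.cephfs (ganesha) plus ingress.nfs.cephfs (haproxy and keepalived holding the virtual IP 192.168.122.2), with haproxy-protocol mode passing client addresses through to ganesha. Once applied it can be verified with:

    ceph nfs cluster ls           # should list 'cephfs'
    ceph nfs cluster info cephfs  # backend daemons and the virtual IP
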
Nov 25 09:33:53 compute-0 ceph-mon[74207]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:53 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:53] ENGINE Bus STARTING
Nov 25 09:33:53 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:53] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:33:53 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:53] ENGINE Client ('192.168.122.100', 59432) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:33:53 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:53] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:33:53 compute-0 ceph-mon[74207]: [25/Nov/2025:09:33:53] ENGINE Bus STARTED
Nov 25 09:33:53 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: from='client.14553 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:53 compute-0 ceph-mon[74207]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:53 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 25 09:33:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:53 compute-0 podman[93282]: 2025-11-25 09:33:53.923024372 +0000 UTC m=+0.034549980 container create fe7b08f4055ed44663f7840520969ad9c77cf025c26bc05fdae4c9a59dc85352 (image=quay.io/ceph/ceph:v19, name=hopeful_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:53 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Check health
Nov 25 09:33:53 compute-0 systemd[1]: Started libpod-conmon-fe7b08f4055ed44663f7840520969ad9c77cf025c26bc05fdae4c9a59dc85352.scope.
Nov 25 09:33:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92f01a8528f95dca27dc7f2aecbe2e82b88132f5eafccfe881ec80df0facb8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92f01a8528f95dca27dc7f2aecbe2e82b88132f5eafccfe881ec80df0facb8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92f01a8528f95dca27dc7f2aecbe2e82b88132f5eafccfe881ec80df0facb8b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:53 compute-0 podman[93282]: 2025-11-25 09:33:53.987789649 +0000 UTC m=+0.099315267 container init fe7b08f4055ed44663f7840520969ad9c77cf025c26bc05fdae4c9a59dc85352 (image=quay.io/ceph/ceph:v19, name=hopeful_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:33:53 compute-0 podman[93282]: 2025-11-25 09:33:53.99259645 +0000 UTC m=+0.104122058 container start fe7b08f4055ed44663f7840520969ad9c77cf025c26bc05fdae4c9a59dc85352 (image=quay.io/ceph/ceph:v19, name=hopeful_ganguly, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 09:33:53 compute-0 podman[93282]: 2025-11-25 09:33:53.993918061 +0000 UTC m=+0.105443669 container attach fe7b08f4055ed44663f7840520969ad9c77cf025c26bc05fdae4c9a59dc85352 (image=quay.io/ceph/ceph:v19, name=hopeful_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:33:54 compute-0 podman[93282]: 2025-11-25 09:33:53.91247732 +0000 UTC m=+0.024002939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:54 compute-0 sudo[93211]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:54 compute-0 sudo[93346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93346]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 09:33:54 compute-0 sudo[93371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14562 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
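
The NFS module keeps cluster and export configuration in the reserved `.nfs` RADOS pool, so the first `nfs cluster create` triggers this pool creation; `yes_i_really_mean_it` is required because the name begins with a dot. Its contents can be inspected directly (a sketch):

    ceph osd pool ls | grep '\.nfs'
    rados -p .nfs ls --all   # config/export objects, one namespace per cluster
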
Nov 25 09:33:54 compute-0 sudo[93371]: pam_unix(sudo:session): session closed for user root
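
Between the configuration pushes, the mgr keeps polling every host through cephadm's read-only introspection subcommands, as in the gather-facts and list-networks invocations above. Run directly:

    sudo cephadm gather-facts    # host facts as JSON (CPU, memory, kernel, disks)
    sudo cephadm list-networks   # interface-to-subnet map used for placement decisions
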
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
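
Before rewriting client configuration on each host, cephadm asks the mon for a fresh minimal ceph.conf and the admin keyring; the two commands dispatched above can be replayed by hand:

    ceph config generate-minimal-conf   # [global] section with fsid and mon_host only
    ceph auth get client.admin          # keyring block for client.admin
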
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 sudo[93415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:54 compute-0 sudo[93415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93415]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:54 compute-0 sudo[93440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93440]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:54 compute-0 sudo[93465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93465]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:54 compute-0 sudo[93490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93490]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:54 compute-0 sudo[93515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93515]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.zcfgby(active, since 2s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:54 compute-0 sudo[93563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:54 compute-0 sudo[93563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93563]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:33:54 compute-0 sudo[93588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93588]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 sudo[93613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93613]: pam_unix(sudo:session): session closed for user root
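
The sudo sequence above is cephadm's write-then-rename pattern: stage the new file under /tmp/cephadm-<fsid>, fix ownership and mode, then mv it into place so /etc/ceph/ceph.conf is replaced atomically. Condensed into a standalone sketch using the same paths:

    staging=/tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
    mkdir -p /etc/ceph "$staging"
    touch "$staging/ceph.conf.new"
    chown 0:0 "$staging/ceph.conf.new"
    chmod 644 "$staging/ceph.conf.new"
    mv "$staging/ceph.conf.new" /etc/ceph/ceph.conf   # rename(2) is atomic within a filesystem
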
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:54 compute-0 sudo[93638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:54 compute-0 sudo[93638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93638]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:54 compute-0 sudo[93663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:54 compute-0 sudo[93663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93663]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:54 compute-0 sudo[93688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93688]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 ceph-mon[74207]: pgmap v5: 11 pgs: 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='client.14562 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:54 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
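[annotation] The audit lines above show how every management action reaches the monitor as a JSON-encoded mon command: client.admin dispatches "nfs cluster create" for cluster_id "cephfs" with an haproxy-protocol ingress on virtual IP 192.168.122.2/24, while the active mgr issues per-host "config rm osd_memory_target", "osd pool create .nfs", "config generate-minimal-conf", and "auth get client.admin". For reference, a minimal sketch of issuing such a command through the python-rados bindings, using the config and keyring paths seen in this log; this is an illustration of the wire format, not what cephadm itself runs:

    import json
    import rados

    # Connect as client.admin using the files cephadm distributes below.
    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"},
    )
    cluster.connect()

    # Same shape as the audited commands: a JSON object with a "prefix"
    # plus its arguments, serialized to a string.
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode())
    cluster.shutdown()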
Nov 25 09:33:54 compute-0 ceph-mon[74207]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mon[74207]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mon[74207]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:33:54 compute-0 ceph-mon[74207]: mgrmap e22: compute-0.zcfgby(active, since 2s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:54 compute-0 sudo[93713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:54 compute-0 sudo[93713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93713]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:54 compute-0 sudo[93738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:54 compute-0 sudo[93738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:54 compute-0 sudo[93738]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[93786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:55 compute-0 sudo[93786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[93786]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[93811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:33:55 compute-0 sudo[93811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[93811]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[93836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:55 compute-0 sudo[93836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[93836]: pam_unix(sudo:session): session closed for user root
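[annotation] The sudo trail from 09:33:54 to here is cephadm's staged-write pattern for distributing ceph.conf: create the target directory, build the new file under a /tmp/cephadm-<fsid> staging tree, set ownership (0:0) and mode (644), and only then mv it over the live path so readers never observe a half-written config. A minimal Python sketch of the same idea (hypothetical helper, not cephadm's code; note os.replace is only atomic within one filesystem, whereas the logged /bin/mv crosses from /tmp into /var/lib):

    import os
    import tempfile

    def staged_write(path: str, data: bytes, mode: int = 0o644) -> None:
        """Write data to path via a temp file plus rename, as the log shows."""
        d = os.path.dirname(path)
        os.makedirs(d, exist_ok=True)
        # Stage next to the destination so the final rename is atomic.
        fd, tmp = tempfile.mkstemp(dir=d, suffix=".new")
        try:
            os.write(fd, data)
            os.fchmod(fd, mode)
            os.fchown(fd, 0, 0)  # root:root, like the chown -R 0:0 above
        finally:
            os.close(fd)
        os.replace(tmp, path)  # readers see the old or new file, never partial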
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 sudo[93861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:33:55 compute-0 sudo[93861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[93861]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 25 09:33:55 compute-0 sudo[93886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 25 09:33:55 compute-0 sudo[93886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 25 09:33:55 compute-0 sudo[93886]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 25 09:33:55 compute-0 sudo[93911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-0 sudo[93911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[93911]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[93936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:55 compute-0 sudo[93936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[93936]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[93961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-0 sudo[93961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[93961]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[94009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-0 sudo[94009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94009]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[94034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-0 sudo[94034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94034]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[94059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 sudo[94059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94059]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 sudo[94084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:55 compute-0 sudo[94084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94084]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 sudo[94109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:33:55 compute-0 sudo[94109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94109]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 sudo[94134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-0 sudo[94134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94134]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[94159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:33:55 compute-0 sudo[94159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94159]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[94184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-0 sudo[94184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94184]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[94232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-0 sudo[94232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94232]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[94257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:33:55 compute-0 sudo[94257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94257]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 compute-0 sudo[94282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:55 compute-0 sudo[94282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:55 compute-0 sudo[94282]: pam_unix(sudo:session): session closed for user root
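[annotation] The same staged sequence repeats here for ceph.client.admin.keyring, with one deliberate difference: the final chmod is 600 rather than the 644 used for ceph.conf, because the keyring carries the admin secret while the config file is world-readable. A sketch of creating a secret file with that mode from the start (an illustration of the design choice, not cephadm's implementation):

    import os

    def write_secret(path: str, data: bytes) -> None:
        # O_EXCL plus mode 0o600 means the keyring is never readable by
        # other users, even briefly (cf. the chmod 600 steps above).
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
        try:
            os.write(fd, data)
        finally:
            os.close(fd)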
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v7: 12 pgs: 1 unknown, 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:33:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev b4c56abf-0978-4d7d-b824-4afbd48fa7ef (Updating node-exporter deployment (+1 -> 3))
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Nov 25 09:33:55 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Nov 25 09:33:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 25 09:33:56 compute-0 ceph-mon[74207]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:56 compute-0 ceph-mon[74207]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:56 compute-0 ceph-mon[74207]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:33:56 compute-0 ceph-mon[74207]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:56 compute-0 ceph-mon[74207]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:56 compute-0 ceph-mon[74207]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:33:56 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 25 09:33:56 compute-0 ceph-mon[74207]: osdmap e39: 3 total, 3 up, 3 in
Nov 25 09:33:56 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 25 09:33:56 compute-0 ceph-mon[74207]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:56 compute-0 ceph-mon[74207]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:56 compute-0 ceph-mon[74207]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:33:56 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 25 09:33:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 25 09:33:56 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 25 09:33:56 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.zcfgby(active, since 4s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:56 compute-0 ceph-mgr[74476]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Nov 25 09:33:56 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:56 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:33:56 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-0 ceph-mgr[74476]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:56 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 25 09:33:56 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:56 compute-0 systemd[1]: libpod-fe7b08f4055ed44663f7840520969ad9c77cf025c26bc05fdae4c9a59dc85352.scope: Deactivated successfully.
Nov 25 09:33:56 compute-0 podman[93282]: 2025-11-25 09:33:56.269724627 +0000 UTC m=+2.381250235 container died fe7b08f4055ed44663f7840520969ad9c77cf025c26bc05fdae4c9a59dc85352 (image=quay.io/ceph/ceph:v19, name=hopeful_ganguly, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 25 09:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b92f01a8528f95dca27dc7f2aecbe2e82b88132f5eafccfe881ec80df0facb8b-merged.mount: Deactivated successfully.
Nov 25 09:33:56 compute-0 podman[93282]: 2025-11-25 09:33:56.290460485 +0000 UTC m=+2.401986093 container remove fe7b08f4055ed44663f7840520969ad9c77cf025c26bc05fdae4c9a59dc85352 (image=quay.io/ceph/ceph:v19, name=hopeful_ganguly, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:33:56 compute-0 systemd[1]: libpod-conmon-fe7b08f4055ed44663f7840520969ad9c77cf025c26bc05fdae4c9a59dc85352.scope: Deactivated successfully.
Nov 25 09:33:56 compute-0 sudo[93268]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:56 compute-0 sudo[94403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyxlzkhyokvvccpszsakchrbueoreduf ; /usr/bin/python3'
Nov 25 09:33:56 compute-0 sudo[94403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:56 compute-0 python3[94405]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:33:56 compute-0 sudo[94403]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:56 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:56 compute-0 ceph-mgr[74476]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Nov 25 09:33:56 compute-0 sudo[94476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myfzzmpgssfncidvlpxmaejxifxsblfl ; /usr/bin/python3'
Nov 25 09:33:56 compute-0 sudo[94476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:57 compute-0 python3[94478]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063236.5805056-37745-64735750046064/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=366a48c0bc0104e6b502b94bc86d9db21512d98a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:57 compute-0 sudo[94476]: pam_unix(sudo:session): session closed for user root
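[annotation] Interleaved with cephadm, the zuul-driven Ansible run stats and then copies /etc/ceph/ceph.client.openstack.keyring with mode 0644 and owner/group 167, the fixed ceph uid/gid used inside Red Hat Ceph container images, so the containerized daemons can read it. A rough Python equivalent of that copy step (the source path is hypothetical; the key material itself is not logged):

    import os
    import shutil

    SRC = "/home/zuul/openstack_keyring.tmp"  # hypothetical staging file
    DST = "/etc/ceph/ceph.client.openstack.keyring"

    shutil.copyfile(SRC, DST)
    os.chown(DST, 167, 167)  # ceph uid/gid inside the container images
    os.chmod(DST, 0o644)     # matches the mode in the Ansible task above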
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 25 09:33:57 compute-0 ceph-mon[74207]: pgmap v7: 12 pgs: 1 unknown, 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:57 compute-0 ceph-mon[74207]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 25 09:33:57 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 25 09:33:57 compute-0 ceph-mon[74207]: osdmap e40: 3 total, 3 up, 3 in
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mgrmap e23: compute-0.zcfgby(active, since 4s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:57 compute-0 ceph-mon[74207]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:57 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:57 compute-0 ceph-mon[74207]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 09:33:57 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:57 compute-0 ceph-mon[74207]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 09:33:57 compute-0 sudo[94526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iicygohyfvtbdaohafrziopdbyamscsy ; /usr/bin/python3'
Nov 25 09:33:57 compute-0 sudo[94526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:57 compute-0 python3[94528]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:57 compute-0 podman[94529]: 2025-11-25 09:33:57.413485001 +0000 UTC m=+0.025701311 container create 0c145378c7383eb1b27c37cf9ef56c550d43a71463a63e687f4e7b8f3cf7c11a (image=quay.io/ceph/ceph:v19, name=infallible_benz, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:57 compute-0 systemd[1]: Started libpod-conmon-0c145378c7383eb1b27c37cf9ef56c550d43a71463a63e687f4e7b8f3cf7c11a.scope.
Nov 25 09:33:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f686524a019ab67094558f720020eb5fb581d0887157de2571fd61ba33473d8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f686524a019ab67094558f720020eb5fb581d0887157de2571fd61ba33473d8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:57 compute-0 podman[94529]: 2025-11-25 09:33:57.454079654 +0000 UTC m=+0.066295964 container init 0c145378c7383eb1b27c37cf9ef56c550d43a71463a63e687f4e7b8f3cf7c11a (image=quay.io/ceph/ceph:v19, name=infallible_benz, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:33:57 compute-0 podman[94529]: 2025-11-25 09:33:57.46193415 +0000 UTC m=+0.074150451 container start 0c145378c7383eb1b27c37cf9ef56c550d43a71463a63e687f4e7b8f3cf7c11a (image=quay.io/ceph/ceph:v19, name=infallible_benz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:33:57 compute-0 podman[94529]: 2025-11-25 09:33:57.464763965 +0000 UTC m=+0.076980265 container attach 0c145378c7383eb1b27c37cf9ef56c550d43a71463a63e687f4e7b8f3cf7c11a (image=quay.io/ceph/ceph:v19, name=infallible_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:57 compute-0 podman[94529]: 2025-11-25 09:33:57.403083473 +0000 UTC m=+0.015299793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3756297363' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3756297363' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 09:33:57 compute-0 systemd[1]: libpod-0c145378c7383eb1b27c37cf9ef56c550d43a71463a63e687f4e7b8f3cf7c11a.scope: Deactivated successfully.
Nov 25 09:33:57 compute-0 podman[94529]: 2025-11-25 09:33:57.785544166 +0000 UTC m=+0.397760467 container died 0c145378c7383eb1b27c37cf9ef56c550d43a71463a63e687f4e7b8f3cf7c11a (image=quay.io/ceph/ceph:v19, name=infallible_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:33:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f686524a019ab67094558f720020eb5fb581d0887157de2571fd61ba33473d8b-merged.mount: Deactivated successfully.
Nov 25 09:33:57 compute-0 podman[94529]: 2025-11-25 09:33:57.803613038 +0000 UTC m=+0.415829338 container remove 0c145378c7383eb1b27c37cf9ef56c550d43a71463a63e687f4e7b8f3cf7c11a (image=quay.io/ceph/ceph:v19, name=infallible_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:33:57 compute-0 sudo[94526]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:57 compute-0 systemd[1]: libpod-conmon-0c145378c7383eb1b27c37cf9ef56c550d43a71463a63e687f4e7b8f3cf7c11a.scope: Deactivated successfully.
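[annotation] The podman lifecycle above (create, init, start, attach, died, remove for container 0c1453...) is a single --rm run of the ceph CLI that imports the freshly copied openstack keyring; the monitor audits it as "auth import" dispatch and finished. auth import merges the entities from the given keyring into the cluster's auth database, updating existing keys and caps rather than failing, so in practice the task can be re-run safely. A sketch of driving that same containerized CLI from Python, abridged from the logged command (the assimilate_ceph.conf volume is dropped as irrelevant to this step):

    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "auth", "import", "-i", "/etc/ceph/ceph.client.openstack.keyring",
    ]
    subprocess.run(cmd, check=True)  # the --rm container is removed on exit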
Nov 25 09:33:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v10: 12 pgs: 1 unknown, 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:57 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev b4c56abf-0978-4d7d-b824-4afbd48fa7ef (Updating node-exporter deployment (+1 -> 3))
Nov 25 09:33:57 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event b4c56abf-0978-4d7d-b824-4afbd48fa7ef (Updating node-exporter deployment (+1 -> 3)) in 2 seconds
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:33:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:58 compute-0 sudo[94577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:58 compute-0 sudo[94577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:58 compute-0 sudo[94577]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:58 compute-0 sudo[94602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:33:58 compute-0 sudo[94602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
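[annotation] Here cephadm, via its fsid-scoped copy under /var/lib/ceph/<fsid>/, invokes "ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd" for the drive group default_drive_group: the OSD is built on a pre-created logical volume, and --no-systemd is passed because cephadm manages the daemon units itself. "--config-json -" means the bootstrap config and keyring arrive on stdin; the sketch below assumes the {"config": ..., "keyring": ...} payload shape, which is an assumption here, not shown in the log:

    import json
    import subprocess

    # Assumed payload shape for cephadm's --config-json stdin hand-off.
    payload = json.dumps({
        "config": open("/etc/ceph/ceph.conf").read(),
        "keyring": open("/var/lib/ceph/bootstrap-osd/ceph.keyring").read(),
    })
    cephadm_bin = ("/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
                   "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    subprocess.run(
        ["sudo", "python3", cephadm_bin,
         "--image", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec",
         "ceph-volume", "--fsid", "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
         "--config-json", "-", "--",
         "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--yes", "--no-systemd"],
        input=payload.encode(), check=True)  # simplified from the logged argv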
Nov 25 09:33:58 compute-0 ceph-mon[74207]: osdmap e41: 3 total, 3 up, 3 in
Nov 25 09:33:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3756297363' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 09:33:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3756297363' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 09:33:58 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:58 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:58 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:58 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:33:58 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:33:58 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:33:58 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:33:58 compute-0 sudo[94673]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wegttgdnnllsejrfvxxfzztigxipfvjf ; /usr/bin/python3'
Nov 25 09:33:58 compute-0 sudo[94673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:58 compute-0 podman[94685]: 2025-11-25 09:33:58.34940434 +0000 UTC m=+0.027609387 container create 6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_ramanujan, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:58 compute-0 systemd[1]: Started libpod-conmon-6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251.scope.
Nov 25 09:33:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:58 compute-0 podman[94685]: 2025-11-25 09:33:58.390746212 +0000 UTC m=+0.068951279 container init 6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:58 compute-0 podman[94685]: 2025-11-25 09:33:58.396481093 +0000 UTC m=+0.074686140 container start 6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:58 compute-0 podman[94685]: 2025-11-25 09:33:58.398397405 +0000 UTC m=+0.076602452 container attach 6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_ramanujan, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:58 compute-0 tender_ramanujan[94698]: 167 167
Nov 25 09:33:58 compute-0 systemd[1]: libpod-6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251.scope: Deactivated successfully.
Nov 25 09:33:58 compute-0 conmon[94698]: conmon 6ea3c0e904847792c486 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251.scope/container/memory.events
Nov 25 09:33:58 compute-0 podman[94685]: 2025-11-25 09:33:58.401103436 +0000 UTC m=+0.079308483 container died 6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_ramanujan, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:58 compute-0 python3[94682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4db32eeefadab781c47debae71db4fec78c482e6f49ea57627c626efe4a47136-merged.mount: Deactivated successfully.
Nov 25 09:33:58 compute-0 podman[94685]: 2025-11-25 09:33:58.424428486 +0000 UTC m=+0.102633523 container remove 6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_ramanujan, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:33:58 compute-0 podman[94685]: 2025-11-25 09:33:58.337612351 +0000 UTC m=+0.015817418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:58 compute-0 systemd[1]: libpod-conmon-6ea3c0e904847792c486e5c01e03ddf4f6094fddba1018a61621911653f82251.scope: Deactivated successfully.
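[annotation] The Ansible task logged at 09:33:58 (python3[94682]) runs the containerized "ceph ... status --format json" and pipes it through "jq .monmap.num_mons", which is why it sets _uses_shell=True: the pipe needs a shell. The same extraction needs neither jq nor a shell if the JSON is parsed in-process; a sketch with the podman command abridged from the log:

    import json
    import subprocess

    cmd = ["podman", "run", "--rm", "--net=host", "--ipc=host",
           "--volume", "/etc/ceph:/etc/ceph:z",
           "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
           "-c", "/etc/ceph/ceph.conf",
           "-k", "/etc/ceph/ceph.client.admin.keyring",
           "status", "--format", "json"]
    raw = subprocess.run(cmd, check=True, capture_output=True).stdout
    status = json.loads(raw)
    print(status["monmap"]["num_mons"])  # 3, per the monmap in this log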
Nov 25 09:33:58 compute-0 podman[94711]: 2025-11-25 09:33:58.457153515 +0000 UTC m=+0.029144711 container create eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae (image=quay.io/ceph/ceph:v19, name=crazy_bhaskara, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:33:58 compute-0 systemd[1]: Started libpod-conmon-eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae.scope.
Nov 25 09:33:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531d9088d5d307df9dca6e6aff6062bf09f7c8a62d34f29b815ef5aacb3bba9d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531d9088d5d307df9dca6e6aff6062bf09f7c8a62d34f29b815ef5aacb3bba9d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:58 compute-0 podman[94711]: 2025-11-25 09:33:58.506255706 +0000 UTC m=+0.078246912 container init eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae (image=quay.io/ceph/ceph:v19, name=crazy_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:33:58 compute-0 podman[94711]: 2025-11-25 09:33:58.510387414 +0000 UTC m=+0.082378599 container start eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae (image=quay.io/ceph/ceph:v19, name=crazy_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:58 compute-0 podman[94711]: 2025-11-25 09:33:58.511602975 +0000 UTC m=+0.083594171 container attach eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae (image=quay.io/ceph/ceph:v19, name=crazy_bhaskara, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:33:58 compute-0 podman[94738]: 2025-11-25 09:33:58.542300412 +0000 UTC m=+0.025695719 container create cfde0d9bdb1951971f6280d8767f8e8ab5395d6237689b5b90bba26cb0a7491a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:58 compute-0 podman[94711]: 2025-11-25 09:33:58.446803333 +0000 UTC m=+0.018794529 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:58 compute-0 systemd[1]: Started libpod-conmon-cfde0d9bdb1951971f6280d8767f8e8ab5395d6237689b5b90bba26cb0a7491a.scope.
Nov 25 09:33:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3838a0d2ec250d5691bbb3a1dbe40e806ed72e83f886c45c4d99a09b935aea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3838a0d2ec250d5691bbb3a1dbe40e806ed72e83f886c45c4d99a09b935aea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3838a0d2ec250d5691bbb3a1dbe40e806ed72e83f886c45c4d99a09b935aea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3838a0d2ec250d5691bbb3a1dbe40e806ed72e83f886c45c4d99a09b935aea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3838a0d2ec250d5691bbb3a1dbe40e806ed72e83f886c45c4d99a09b935aea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:58 compute-0 podman[94738]: 2025-11-25 09:33:58.601354272 +0000 UTC m=+0.084749569 container init cfde0d9bdb1951971f6280d8767f8e8ab5395d6237689b5b90bba26cb0a7491a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:33:58 compute-0 podman[94738]: 2025-11-25 09:33:58.607081779 +0000 UTC m=+0.090477086 container start cfde0d9bdb1951971f6280d8767f8e8ab5395d6237689b5b90bba26cb0a7491a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:58 compute-0 podman[94738]: 2025-11-25 09:33:58.608634256 +0000 UTC m=+0.092029553 container attach cfde0d9bdb1951971f6280d8767f8e8ab5395d6237689b5b90bba26cb0a7491a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:33:58 compute-0 podman[94738]: 2025-11-25 09:33:58.531787595 +0000 UTC m=+0.015182912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:58 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:33:58 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.zcfgby(active, since 6s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:58 compute-0 musing_wing[94753]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:33:58 compute-0 musing_wing[94753]: --> All data devices are unavailable
Nov 25 09:33:58 compute-0 crazy_bhaskara[94729]: 
Nov 25 09:33:58 compute-0 crazy_bhaskara[94729]: {"fsid":"af1c9ae3-08d7-5547-a53d-2cccf7c6ef90","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":56,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1764063199,"num_in_osds":3,"osd_in_since":1764063185,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":11},{"state_name":"unknown","count":1}],"num_pgs":12,"num_pools":12,"num_objects":194,"data_bytes":464595,"bytes_used":88530944,"bytes_avail":64323395584,"bytes_total":64411926528,"unknown_pgs_ratio":0.083333335816860199},"fsmap":{"epoch":2,"btime":"2025-11-25T09:33:52:871701+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":3,"modified":"2025-11-25T09:33:38.641268+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.zcfgby":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.plffrn":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.flybft":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14412":{"start_epoch":3,"start_stamp":"2025-11-25T09:33:37.895580+0000","gid":14412,"addr":"192.168.122.100:0/370176697","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.uosdwi","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 
2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865360","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"6af48147-6aba-44e3-91a3-565a32433f82","zone_name":"default","zonegroup_id":"7f877101-a613-42fa-9374-f143e99606e2","zonegroup_name":"default"},"task_status":{}},"24152":{"start_epoch":3,"start_stamp":"2025-11-25T09:33:37.825710+0000","gid":24152,"addr":"192.168.122.101:0/1293368742","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.lyczeh","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865372","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"6af48147-6aba-44e3-91a3-565a32433f82","zone_name":"default","zonegroup_id":"7f877101-a613-42fa-9374-f143e99606e2","zonegroup_name":"default"},"task_status":{}},"24163":{"start_epoch":3,"start_stamp":"2025-11-25T09:33:37.826789+0000","gid":24163,"addr":"192.168.122.102:0/1045634058","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.oidoiv","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865372","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"6af48147-6aba-44e3-91a3-565a32433f82","zone_name":"default","zonegroup_id":"7f877101-a613-42fa-9374-f143e99606e2","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"a118cadf-4c03-45ee-bad8-9d1945094331":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"b4c56abf-0978-4d7d-b824-4afbd48fa7ef":{"message":"Updating node-exporter deployment (+1 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 25 09:33:58 compute-0 systemd[1]: libpod-eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae.scope: Deactivated successfully.
Nov 25 09:33:58 compute-0 conmon[94729]: conmon eb861eddc97165a51946 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae.scope/container/memory.events
Nov 25 09:33:58 compute-0 systemd[1]: libpod-cfde0d9bdb1951971f6280d8767f8e8ab5395d6237689b5b90bba26cb0a7491a.scope: Deactivated successfully.
Nov 25 09:33:58 compute-0 podman[94789]: 2025-11-25 09:33:58.904750735 +0000 UTC m=+0.019370395 container died cfde0d9bdb1951971f6280d8767f8e8ab5395d6237689b5b90bba26cb0a7491a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:33:58 compute-0 podman[94790]: 2025-11-25 09:33:58.913294971 +0000 UTC m=+0.028248972 container died eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae (image=quay.io/ceph/ceph:v19, name=crazy_bhaskara, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b3838a0d2ec250d5691bbb3a1dbe40e806ed72e83f886c45c4d99a09b935aea-merged.mount: Deactivated successfully.
Nov 25 09:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-531d9088d5d307df9dca6e6aff6062bf09f7c8a62d34f29b815ef5aacb3bba9d-merged.mount: Deactivated successfully.
Nov 25 09:33:58 compute-0 podman[94789]: 2025-11-25 09:33:58.926454438 +0000 UTC m=+0.041074078 container remove cfde0d9bdb1951971f6280d8767f8e8ab5395d6237689b5b90bba26cb0a7491a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wing, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:58 compute-0 systemd[1]: libpod-conmon-cfde0d9bdb1951971f6280d8767f8e8ab5395d6237689b5b90bba26cb0a7491a.scope: Deactivated successfully.
Nov 25 09:33:58 compute-0 podman[94790]: 2025-11-25 09:33:58.935348905 +0000 UTC m=+0.050302896 container remove eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae (image=quay.io/ceph/ceph:v19, name=crazy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:58 compute-0 systemd[1]: libpod-conmon-eb861eddc97165a519460832d12ff3710399bf07bac8ecf871a8d27b8456b5ae.scope: Deactivated successfully.
Nov 25 09:33:58 compute-0 sudo[94673]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:58 compute-0 sudo[94602]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:59 compute-0 sudo[94812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:59 compute-0 sudo[94812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:59 compute-0 sudo[94812]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:59 compute-0 sudo[94837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:33:59 compute-0 sudo[94837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:59 compute-0 sudo[94883]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjemqgzcuolfkhqpyhfhytcrmcnnhcl ; /usr/bin/python3'
Nov 25 09:33:59 compute-0 sudo[94883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:59 compute-0 python3[94887]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:59 compute-0 podman[94888]: 2025-11-25 09:33:59.20950699 +0000 UTC m=+0.027374854 container create 51f084247e52a71403dc67cac6dce6c5ecb829478ae4ed7494ac79cb76fa06ed (image=quay.io/ceph/ceph:v19, name=flamboyant_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:59 compute-0 systemd[1]: Started libpod-conmon-51f084247e52a71403dc67cac6dce6c5ecb829478ae4ed7494ac79cb76fa06ed.scope.
Nov 25 09:33:59 compute-0 ceph-mon[74207]: pgmap v10: 12 pgs: 1 unknown, 11 active+clean; 454 KiB data, 84 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:33:59 compute-0 ceph-mon[74207]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 09:33:59 compute-0 ceph-mon[74207]: mgrmap e24: compute-0.zcfgby(active, since 6s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:33:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3644062899' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:33:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89353679bcd01ef801f603c16e95c6e2328ad83dea2a9b86d48b692a6bef9d20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89353679bcd01ef801f603c16e95c6e2328ad83dea2a9b86d48b692a6bef9d20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:59 compute-0 podman[94888]: 2025-11-25 09:33:59.25737273 +0000 UTC m=+0.075240614 container init 51f084247e52a71403dc67cac6dce6c5ecb829478ae4ed7494ac79cb76fa06ed (image=quay.io/ceph/ceph:v19, name=flamboyant_banzai, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 09:33:59 compute-0 podman[94888]: 2025-11-25 09:33:59.267941564 +0000 UTC m=+0.085809438 container start 51f084247e52a71403dc67cac6dce6c5ecb829478ae4ed7494ac79cb76fa06ed (image=quay.io/ceph/ceph:v19, name=flamboyant_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:33:59 compute-0 podman[94888]: 2025-11-25 09:33:59.275257705 +0000 UTC m=+0.093125579 container attach 51f084247e52a71403dc67cac6dce6c5ecb829478ae4ed7494ac79cb76fa06ed (image=quay.io/ceph/ceph:v19, name=flamboyant_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:33:59 compute-0 podman[94888]: 2025-11-25 09:33:59.198598797 +0000 UTC m=+0.016466682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:33:59 compute-0 podman[94936]: 2025-11-25 09:33:59.356423347 +0000 UTC m=+0.031974505 container create de7a5f601dab3df694e7b71ce981ebc54fbddaeaf683de6769a9125615ae33f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:59 compute-0 systemd[1]: Started libpod-conmon-de7a5f601dab3df694e7b71ce981ebc54fbddaeaf683de6769a9125615ae33f9.scope.
Nov 25 09:33:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:59 compute-0 podman[94936]: 2025-11-25 09:33:59.39395633 +0000 UTC m=+0.069507488 container init de7a5f601dab3df694e7b71ce981ebc54fbddaeaf683de6769a9125615ae33f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_feynman, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:33:59 compute-0 podman[94936]: 2025-11-25 09:33:59.398164202 +0000 UTC m=+0.073715360 container start de7a5f601dab3df694e7b71ce981ebc54fbddaeaf683de6769a9125615ae33f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_feynman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 09:33:59 compute-0 podman[94936]: 2025-11-25 09:33:59.39938347 +0000 UTC m=+0.074934629 container attach de7a5f601dab3df694e7b71ce981ebc54fbddaeaf683de6769a9125615ae33f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:33:59 compute-0 vigilant_feynman[94967]: 167 167
Nov 25 09:33:59 compute-0 systemd[1]: libpod-de7a5f601dab3df694e7b71ce981ebc54fbddaeaf683de6769a9125615ae33f9.scope: Deactivated successfully.
Nov 25 09:33:59 compute-0 podman[94936]: 2025-11-25 09:33:59.401816476 +0000 UTC m=+0.077367634 container died de7a5f601dab3df694e7b71ce981ebc54fbddaeaf683de6769a9125615ae33f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:59 compute-0 podman[94936]: 2025-11-25 09:33:59.416660608 +0000 UTC m=+0.092211766 container remove de7a5f601dab3df694e7b71ce981ebc54fbddaeaf683de6769a9125615ae33f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_feynman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:33:59 compute-0 podman[94936]: 2025-11-25 09:33:59.344373272 +0000 UTC m=+0.019924451 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:59 compute-0 systemd[1]: libpod-conmon-de7a5f601dab3df694e7b71ce981ebc54fbddaeaf683de6769a9125615ae33f9.scope: Deactivated successfully.
Nov 25 09:33:59 compute-0 podman[94989]: 2025-11-25 09:33:59.524966341 +0000 UTC m=+0.027647308 container create e04adaca0c0aea415daf6b33e24e56c596e0f6643e9d2a081c7fc6c16a260af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bose, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:33:59 compute-0 systemd[1]: Started libpod-conmon-e04adaca0c0aea415daf6b33e24e56c596e0f6643e9d2a081c7fc6c16a260af6.scope.
Nov 25 09:33:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654c9037949b46a748d80d39b8ea550b13e50f7bc966be347aa908758888c0d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654c9037949b46a748d80d39b8ea550b13e50f7bc966be347aa908758888c0d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654c9037949b46a748d80d39b8ea550b13e50f7bc966be347aa908758888c0d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654c9037949b46a748d80d39b8ea550b13e50f7bc966be347aa908758888c0d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:33:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:33:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2787207747' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:33:59 compute-0 flamboyant_banzai[94911]: 
Nov 25 09:33:59 compute-0 flamboyant_banzai[94911]: {"epoch":3,"fsid":"af1c9ae3-08d7-5547-a53d-2cccf7c6ef90","modified":"2025-11-25T09:32:57.086503Z","created":"2025-11-25T09:31:14.695764Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Nov 25 09:33:59 compute-0 flamboyant_banzai[94911]: dumped monmap epoch 3
Nov 25 09:33:59 compute-0 podman[94989]: 2025-11-25 09:33:59.582577182 +0000 UTC m=+0.085258150 container init e04adaca0c0aea415daf6b33e24e56c596e0f6643e9d2a081c7fc6c16a260af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bose, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:33:59 compute-0 podman[94989]: 2025-11-25 09:33:59.587202311 +0000 UTC m=+0.089883278 container start e04adaca0c0aea415daf6b33e24e56c596e0f6643e9d2a081c7fc6c16a260af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:33:59 compute-0 podman[94989]: 2025-11-25 09:33:59.591922519 +0000 UTC m=+0.094603476 container attach e04adaca0c0aea415daf6b33e24e56c596e0f6643e9d2a081c7fc6c16a260af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:33:59 compute-0 systemd[1]: libpod-51f084247e52a71403dc67cac6dce6c5ecb829478ae4ed7494ac79cb76fa06ed.scope: Deactivated successfully.
Nov 25 09:33:59 compute-0 podman[94989]: 2025-11-25 09:33:59.513325227 +0000 UTC m=+0.016006195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:33:59 compute-0 podman[95009]: 2025-11-25 09:33:59.619346856 +0000 UTC m=+0.016189891 container died 51f084247e52a71403dc67cac6dce6c5ecb829478ae4ed7494ac79cb76fa06ed (image=quay.io/ceph/ceph:v19, name=flamboyant_banzai, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:33:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-89353679bcd01ef801f603c16e95c6e2328ad83dea2a9b86d48b692a6bef9d20-merged.mount: Deactivated successfully.
Nov 25 09:33:59 compute-0 podman[95009]: 2025-11-25 09:33:59.638355117 +0000 UTC m=+0.035198132 container remove 51f084247e52a71403dc67cac6dce6c5ecb829478ae4ed7494ac79cb76fa06ed (image=quay.io/ceph/ceph:v19, name=flamboyant_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:33:59 compute-0 systemd[1]: libpod-conmon-51f084247e52a71403dc67cac6dce6c5ecb829478ae4ed7494ac79cb76fa06ed.scope: Deactivated successfully.
Nov 25 09:33:59 compute-0 sudo[94883]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:59 compute-0 quirky_bose[95002]: {
Nov 25 09:33:59 compute-0 quirky_bose[95002]:     "1": [
Nov 25 09:33:59 compute-0 quirky_bose[95002]:         {
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "devices": [
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "/dev/loop3"
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             ],
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "lv_name": "ceph_lv0",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "lv_size": "21470642176",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "name": "ceph_lv0",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "tags": {
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.cluster_name": "ceph",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.crush_device_class": "",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.encrypted": "0",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.osd_id": "1",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.type": "block",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.vdo": "0",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:                 "ceph.with_tpm": "0"
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             },
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "type": "block",
Nov 25 09:33:59 compute-0 quirky_bose[95002]:             "vg_name": "ceph_vg0"
Nov 25 09:33:59 compute-0 quirky_bose[95002]:         }
Nov 25 09:33:59 compute-0 quirky_bose[95002]:     ]
Nov 25 09:33:59 compute-0 quirky_bose[95002]: }
Nov 25 09:33:59 compute-0 systemd[1]: libpod-e04adaca0c0aea415daf6b33e24e56c596e0f6643e9d2a081c7fc6c16a260af6.scope: Deactivated successfully.
Nov 25 09:33:59 compute-0 podman[94989]: 2025-11-25 09:33:59.81691268 +0000 UTC m=+0.319593647 container died e04adaca0c0aea415daf6b33e24e56c596e0f6643e9d2a081c7fc6c16a260af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bose, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:33:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-654c9037949b46a748d80d39b8ea550b13e50f7bc966be347aa908758888c0d6-merged.mount: Deactivated successfully.
Nov 25 09:33:59 compute-0 podman[94989]: 2025-11-25 09:33:59.840460749 +0000 UTC m=+0.343141717 container remove e04adaca0c0aea415daf6b33e24e56c596e0f6643e9d2a081c7fc6c16a260af6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bose, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:33:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v11: 12 pgs: 12 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Nov 25 09:33:59 compute-0 systemd[1]: libpod-conmon-e04adaca0c0aea415daf6b33e24e56c596e0f6643e9d2a081c7fc6c16a260af6.scope: Deactivated successfully.
Nov 25 09:33:59 compute-0 sudo[94837]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:59 compute-0 sudo[95035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:33:59 compute-0 sudo[95035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:59 compute-0 sudo[95035]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:59 compute-0 sudo[95060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:33:59 compute-0 sudo[95060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:33:59 compute-0 sudo[95108]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcpyswqjjvxvmfdaixdhdzwmpqtibuqb ; /usr/bin/python3'
Nov 25 09:33:59 compute-0 sudo[95108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:00 compute-0 python3[95110]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:00 compute-0 podman[95111]: 2025-11-25 09:34:00.137399559 +0000 UTC m=+0.029979825 container create e1c565d315c24386def802e2f199ceaebe94212f5527fe5404bead5916ae5b70 (image=quay.io/ceph/ceph:v19, name=sleepy_sammet, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:00 compute-0 systemd[1]: Started libpod-conmon-e1c565d315c24386def802e2f199ceaebe94212f5527fe5404bead5916ae5b70.scope.
Nov 25 09:34:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae8ea0b5d6a65cbdf00adbf23ac5c6f0faa21440e6540b0019e42bd7c09598c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae8ea0b5d6a65cbdf00adbf23ac5c6f0faa21440e6540b0019e42bd7c09598c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:00 compute-0 podman[95111]: 2025-11-25 09:34:00.191060313 +0000 UTC m=+0.083640589 container init e1c565d315c24386def802e2f199ceaebe94212f5527fe5404bead5916ae5b70 (image=quay.io/ceph/ceph:v19, name=sleepy_sammet, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:34:00 compute-0 podman[95111]: 2025-11-25 09:34:00.196393967 +0000 UTC m=+0.088974233 container start e1c565d315c24386def802e2f199ceaebe94212f5527fe5404bead5916ae5b70 (image=quay.io/ceph/ceph:v19, name=sleepy_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:00 compute-0 podman[95111]: 2025-11-25 09:34:00.197874869 +0000 UTC m=+0.090455145 container attach e1c565d315c24386def802e2f199ceaebe94212f5527fe5404bead5916ae5b70 (image=quay.io/ceph/ceph:v19, name=sleepy_sammet, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:34:00 compute-0 podman[95111]: 2025-11-25 09:34:00.124716521 +0000 UTC m=+0.017296807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2787207747' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:34:00 compute-0 podman[95158]: 2025-11-25 09:34:00.286503701 +0000 UTC m=+0.028103959 container create 9a08b73c9e01aae1b1bb8d89a16ba6e32f89833ac197e95469ed75fe3e833f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:00 compute-0 systemd[1]: Started libpod-conmon-9a08b73c9e01aae1b1bb8d89a16ba6e32f89833ac197e95469ed75fe3e833f11.scope.
Nov 25 09:34:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:00 compute-0 podman[95158]: 2025-11-25 09:34:00.332947359 +0000 UTC m=+0.074547618 container init 9a08b73c9e01aae1b1bb8d89a16ba6e32f89833ac197e95469ed75fe3e833f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:34:00 compute-0 podman[95158]: 2025-11-25 09:34:00.336956367 +0000 UTC m=+0.078556625 container start 9a08b73c9e01aae1b1bb8d89a16ba6e32f89833ac197e95469ed75fe3e833f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:34:00 compute-0 podman[95158]: 2025-11-25 09:34:00.338106725 +0000 UTC m=+0.079706984 container attach 9a08b73c9e01aae1b1bb8d89a16ba6e32f89833ac197e95469ed75fe3e833f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:00 compute-0 fervent_mclaren[95189]: 167 167
Nov 25 09:34:00 compute-0 systemd[1]: libpod-9a08b73c9e01aae1b1bb8d89a16ba6e32f89833ac197e95469ed75fe3e833f11.scope: Deactivated successfully.
Nov 25 09:34:00 compute-0 podman[95158]: 2025-11-25 09:34:00.33973824 +0000 UTC m=+0.081338499 container died 9a08b73c9e01aae1b1bb8d89a16ba6e32f89833ac197e95469ed75fe3e833f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:00 compute-0 podman[95158]: 2025-11-25 09:34:00.358359513 +0000 UTC m=+0.099959772 container remove 9a08b73c9e01aae1b1bb8d89a16ba6e32f89833ac197e95469ed75fe3e833f11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:34:00 compute-0 podman[95158]: 2025-11-25 09:34:00.273734819 +0000 UTC m=+0.015335079 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:00 compute-0 systemd[1]: libpod-conmon-9a08b73c9e01aae1b1bb8d89a16ba6e32f89833ac197e95469ed75fe3e833f11.scope: Deactivated successfully.
Nov 25 09:34:00 compute-0 podman[95211]: 2025-11-25 09:34:00.474800943 +0000 UTC m=+0.028057954 container create 935658f5ff88fee40476486e11e3ee38752e7e91fc5239c8475da65d6f15ff57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:34:00 compute-0 systemd[1]: Started libpod-conmon-935658f5ff88fee40476486e11e3ee38752e7e91fc5239c8475da65d6f15ff57.scope.
Nov 25 09:34:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:00 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Nov 25 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e955d10ee14f5e5d421c1eae954ae7c43d6d25857cff6fc94ee82bd74fc7a49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e955d10ee14f5e5d421c1eae954ae7c43d6d25857cff6fc94ee82bd74fc7a49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:00 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1817509438' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e955d10ee14f5e5d421c1eae954ae7c43d6d25857cff6fc94ee82bd74fc7a49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:00 compute-0 sleepy_sammet[95143]: [client.openstack]
Nov 25 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e955d10ee14f5e5d421c1eae954ae7c43d6d25857cff6fc94ee82bd74fc7a49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:00 compute-0 sleepy_sammet[95143]:         key = AQBHdyVpAAAAABAACuXVpdObkUXtdSdlcr1vHw==
Nov 25 09:34:00 compute-0 sleepy_sammet[95143]:         caps mgr = "allow *"
Nov 25 09:34:00 compute-0 sleepy_sammet[95143]:         caps mon = "profile rbd"
Nov 25 09:34:00 compute-0 sleepy_sammet[95143]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 25 09:34:00 compute-0 podman[95211]: 2025-11-25 09:34:00.524880897 +0000 UTC m=+0.078137917 container init 935658f5ff88fee40476486e11e3ee38752e7e91fc5239c8475da65d6f15ff57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:00 compute-0 podman[95211]: 2025-11-25 09:34:00.530090227 +0000 UTC m=+0.083347227 container start 935658f5ff88fee40476486e11e3ee38752e7e91fc5239c8475da65d6f15ff57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:00 compute-0 podman[95211]: 2025-11-25 09:34:00.531429762 +0000 UTC m=+0.084686772 container attach 935658f5ff88fee40476486e11e3ee38752e7e91fc5239c8475da65d6f15ff57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:00 compute-0 systemd[1]: libpod-e1c565d315c24386def802e2f199ceaebe94212f5527fe5404bead5916ae5b70.scope: Deactivated successfully.
Nov 25 09:34:00 compute-0 podman[95111]: 2025-11-25 09:34:00.546392618 +0000 UTC m=+0.438972884 container died e1c565d315c24386def802e2f199ceaebe94212f5527fe5404bead5916ae5b70 (image=quay.io/ceph/ceph:v19, name=sleepy_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-aae8ea0b5d6a65cbdf00adbf23ac5c6f0faa21440e6540b0019e42bd7c09598c-merged.mount: Deactivated successfully.
Nov 25 09:34:00 compute-0 podman[95211]: 2025-11-25 09:34:00.463286648 +0000 UTC m=+0.016543668 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:00 compute-0 podman[95111]: 2025-11-25 09:34:00.563648226 +0000 UTC m=+0.456228493 container remove e1c565d315c24386def802e2f199ceaebe94212f5527fe5404bead5916ae5b70 (image=quay.io/ceph/ceph:v19, name=sleepy_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:34:00 compute-0 sudo[95108]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:00 compute-0 systemd[1]: libpod-conmon-e1c565d315c24386def802e2f199ceaebe94212f5527fe5404bead5916ae5b70.scope: Deactivated successfully.
Nov 25 09:34:01 compute-0 lvm[95312]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:34:01 compute-0 lvm[95312]: VG ceph_vg0 finished
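The lvm messages above come from event-based autoactivation: a pvscan saw PV /dev/loop3 appear and found VG ceph_vg0 complete. A quick manual check of the same state (a sketch, run on this host):

    $ sudo pvs /dev/loop3    # PV attributes and owning VG
    $ sudo vgs ceph_vg0      # VG size, PV count, LV count
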
Nov 25 09:34:01 compute-0 focused_stonebraker[95224]: {}
Nov 25 09:34:01 compute-0 podman[95211]: 2025-11-25 09:34:01.046835907 +0000 UTC m=+0.600092957 container died 935658f5ff88fee40476486e11e3ee38752e7e91fc5239c8475da65d6f15ff57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:34:01 compute-0 systemd[1]: libpod-935658f5ff88fee40476486e11e3ee38752e7e91fc5239c8475da65d6f15ff57.scope: Deactivated successfully.
Nov 25 09:34:01 compute-0 lvm[95313]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:34:01 compute-0 lvm[95313]: VG ceph_vg0 finished
Nov 25 09:34:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e955d10ee14f5e5d421c1eae954ae7c43d6d25857cff6fc94ee82bd74fc7a49-merged.mount: Deactivated successfully.
Nov 25 09:34:01 compute-0 podman[95211]: 2025-11-25 09:34:01.073995103 +0000 UTC m=+0.627252104 container remove 935658f5ff88fee40476486e11e3ee38752e7e91fc5239c8475da65d6f15ff57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 09:34:01 compute-0 systemd[1]: libpod-conmon-935658f5ff88fee40476486e11e3ee38752e7e91fc5239c8475da65d6f15ff57.scope: Deactivated successfully.
Nov 25 09:34:01 compute-0 sudo[95060]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:01 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 09527637-e393-449e-a08c-4693b99d6230 (Updating mds.cephfs deployment (+3 -> 3))
Nov 25 09:34:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pwazzx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 25 09:34:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pwazzx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 09:34:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pwazzx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
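The dispatch/finished pair above is the JSON mon_command form; translated into the ceph CLI, with the caps copied from the log, the same call would look roughly like:

    $ ceph auth get-or-create mds.cephfs.compute-2.pwazzx \
          mon 'profile mds' osd 'allow rw tag cephfs *=*' mds 'allow'
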
Nov 25 09:34:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:01 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:01 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.pwazzx on compute-2
Nov 25 09:34:01 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.pwazzx on compute-2
Nov 25 09:34:01 compute-0 ceph-mon[74207]: pgmap v11: 12 pgs: 12 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Nov 25 09:34:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1817509438' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 09:34:01 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:01 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:01 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pwazzx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 09:34:01 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pwazzx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 09:34:01 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:01 compute-0 sudo[95472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfibqstsqpjejyktgjafggfsbzzsfptk ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063241.3209655-37817-272928007086464/async_wrapper.py j788888756668 30 /home/zuul/.ansible/tmp/ansible-tmp-1764063241.3209655-37817-272928007086464/AnsiballZ_command.py _'
Nov 25 09:34:01 compute-0 sudo[95472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:01 compute-0 ansible-async_wrapper.py[95474]: Invoked with j788888756668 30 /home/zuul/.ansible/tmp/ansible-tmp-1764063241.3209655-37817-272928007086464/AnsiballZ_command.py _
Nov 25 09:34:01 compute-0 ansible-async_wrapper.py[95477]: Starting module and watcher
Nov 25 09:34:01 compute-0 ansible-async_wrapper.py[95477]: Start watching 95478 (30)
Nov 25 09:34:01 compute-0 ansible-async_wrapper.py[95478]: Start module (95478)
Nov 25 09:34:01 compute-0 ansible-async_wrapper.py[95474]: Return async_wrapper task started.
Nov 25 09:34:01 compute-0 sudo[95472]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:01 compute-0 python3[95479]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
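For readability, the _raw_params in the ansible command above unpack to this shell invocation (image, mounts, and fsid exactly as logged):

    $ podman run --rm --net=host --ipc=host \
          --volume /etc/ceph:/etc/ceph:z \
          --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
          --entrypoint ceph quay.io/ceph/ceph:v19 \
          --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 \
          -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
          orch status --format json
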
Nov 25 09:34:01 compute-0 podman[95480]: 2025-11-25 09:34:01.803863543 +0000 UTC m=+0.027481416 container create 3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6 (image=quay.io/ceph/ceph:v19, name=mystifying_brattain, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:34:01 compute-0 systemd[1]: Started libpod-conmon-3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6.scope.
Nov 25 09:34:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v12: 12 pgs: 12 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Nov 25 09:34:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55a81ded0a4e454105bee7dea9b6cd52e69848b5e2d8f6da908a5889a7e11b0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55a81ded0a4e454105bee7dea9b6cd52e69848b5e2d8f6da908a5889a7e11b0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:01 compute-0 podman[95480]: 2025-11-25 09:34:01.866728048 +0000 UTC m=+0.090345909 container init 3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6 (image=quay.io/ceph/ceph:v19, name=mystifying_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:34:01 compute-0 podman[95480]: 2025-11-25 09:34:01.871181362 +0000 UTC m=+0.094799223 container start 3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6 (image=quay.io/ceph/ceph:v19, name=mystifying_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:34:01 compute-0 podman[95480]: 2025-11-25 09:34:01.872388137 +0000 UTC m=+0.096006000 container attach 3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6 (image=quay.io/ceph/ceph:v19, name=mystifying_brattain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 09:34:01 compute-0 podman[95480]: 2025-11-25 09:34:01.793378719 +0000 UTC m=+0.016996601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:01 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 6 completed events
Nov 25 09:34:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:34:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:01 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event a118cadf-4c03-45ee-bad8-9d1945094331 (Global Recovery Event) in 5 seconds
Nov 25 09:34:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14598 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:02 compute-0 mystifying_brattain[95492]: 
Nov 25 09:34:02 compute-0 mystifying_brattain[95492]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
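The JSON printed by the container is the orchestrator status the playbook is polling for. A sketch of extracting the individual fields (assumes jq is available on the host):

    $ ceph orch status --format json | jq -r '.available, .backend, .workers'
    true
    cephadm
    10
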
Nov 25 09:34:02 compute-0 systemd[1]: libpod-3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6.scope: Deactivated successfully.
Nov 25 09:34:02 compute-0 conmon[95492]: conmon 3bbaee6183bbe9613ba1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6.scope/container/memory.events
Nov 25 09:34:02 compute-0 podman[95480]: 2025-11-25 09:34:02.157798695 +0000 UTC m=+0.381416557 container died 3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6 (image=quay.io/ceph/ceph:v19, name=mystifying_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a55a81ded0a4e454105bee7dea9b6cd52e69848b5e2d8f6da908a5889a7e11b0-merged.mount: Deactivated successfully.
Nov 25 09:34:02 compute-0 podman[95480]: 2025-11-25 09:34:02.177862116 +0000 UTC m=+0.401479977 container remove 3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6 (image=quay.io/ceph/ceph:v19, name=mystifying_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:02 compute-0 systemd[1]: libpod-conmon-3bbaee6183bbe9613ba1dd459838ce86c382c3de67cfe39f94cba39f47fa9df6.scope: Deactivated successfully.
Nov 25 09:34:02 compute-0 ansible-async_wrapper.py[95478]: Module complete (95478)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: Deploying daemon mds.cephfs.compute-2.pwazzx on compute-2
Nov 25 09:34:02 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjveyw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjveyw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjveyw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:02 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.wjveyw on compute-0
Nov 25 09:34:02 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.wjveyw on compute-0
Nov 25 09:34:02 compute-0 sudo[95527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:02 compute-0 sudo[95527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:02 compute-0 sudo[95527]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:02 compute-0 sudo[95552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:02 compute-0 sudo[95552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] as mds.0
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.pwazzx assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Cluster is now healthy
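With MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX both cleared, a health query at this point should come back clean (illustrative output, not from this log):

    $ ceph health
    HEALTH_OK
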
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e3 new map
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-11-25T09:34:02.633817+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        3
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:02.633809+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:creating seq 1 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
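The indented block above is the mon's print_map dump of FSMap epoch 3. The same layout can be pulled on demand from any host with admin credentials (a sketch):

    $ ceph fs dump         # full FSMap, same fields as print_map
    $ ceph fs get cephfs   # just the 'cephfs' filesystem
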
Nov 25 09:34:02 compute-0 podman[95612]: 2025-11-25 09:34:02.639704349 +0000 UTC m=+0.027110396 container create d949ac8e8cd2ec3cf3055ecf3192d14bd9ac604c4463a5dd84f64ddfc15c0575 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_colden, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] up:boot
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:creating}
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.pwazzx"} v 0)
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.pwazzx"}]: dispatch
Nov 25 09:34:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e3 all = 0
Nov 25 09:34:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.pwazzx is now active in filesystem cephfs as rank 0
Nov 25 09:34:02 compute-0 systemd[1]: Started libpod-conmon-d949ac8e8cd2ec3cf3055ecf3192d14bd9ac604c4463a5dd84f64ddfc15c0575.scope.
Nov 25 09:34:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:02 compute-0 podman[95612]: 2025-11-25 09:34:02.688658 +0000 UTC m=+0.076064066 container init d949ac8e8cd2ec3cf3055ecf3192d14bd9ac604c4463a5dd84f64ddfc15c0575 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_colden, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:34:02 compute-0 podman[95612]: 2025-11-25 09:34:02.693012558 +0000 UTC m=+0.080418605 container start d949ac8e8cd2ec3cf3055ecf3192d14bd9ac604c4463a5dd84f64ddfc15c0575 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:02 compute-0 loving_colden[95625]: 167 167
Nov 25 09:34:02 compute-0 podman[95612]: 2025-11-25 09:34:02.69519886 +0000 UTC m=+0.082604907 container attach d949ac8e8cd2ec3cf3055ecf3192d14bd9ac604c4463a5dd84f64ddfc15c0575 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_colden, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:34:02 compute-0 systemd[1]: libpod-d949ac8e8cd2ec3cf3055ecf3192d14bd9ac604c4463a5dd84f64ddfc15c0575.scope: Deactivated successfully.
Nov 25 09:34:02 compute-0 podman[95612]: 2025-11-25 09:34:02.696125586 +0000 UTC m=+0.083531634 container died d949ac8e8cd2ec3cf3055ecf3192d14bd9ac604c4463a5dd84f64ddfc15c0575 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_colden, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-00c1347048ce3ca77ed35d1ff2c7925500a0981353fbc703d3b4e253c00958eb-merged.mount: Deactivated successfully.
Nov 25 09:34:02 compute-0 podman[95612]: 2025-11-25 09:34:02.712797694 +0000 UTC m=+0.100203741 container remove d949ac8e8cd2ec3cf3055ecf3192d14bd9ac604c4463a5dd84f64ddfc15c0575 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:02 compute-0 podman[95612]: 2025-11-25 09:34:02.627673349 +0000 UTC m=+0.015079416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:02 compute-0 systemd[1]: Reloading.
Nov 25 09:34:02 compute-0 systemd-sysv-generator[95713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:02 compute-0 systemd-rc-local-generator[95710]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:02 compute-0 systemd[1]: libpod-conmon-d949ac8e8cd2ec3cf3055ecf3192d14bd9ac604c4463a5dd84f64ddfc15c0575.scope: Deactivated successfully.
Nov 25 09:34:02 compute-0 sudo[95687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttqhubvmlrtlbksjkirmxhhqmszimkgp ; /usr/bin/python3'
Nov 25 09:34:02 compute-0 sudo[95687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:02 compute-0 systemd[1]: Reloading.
Nov 25 09:34:03 compute-0 systemd-rc-local-generator[95751]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:03 compute-0 systemd-sysv-generator[95758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:03 compute-0 python3[95725]: ansible-ansible.legacy.async_status Invoked with jid=j788888756668.95474 mode=status _async_dir=/root/.ansible_async
Nov 25 09:34:03 compute-0 sudo[95687]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:03 compute-0 sudo[95809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unglovccokutoegwgdcuqnrubjgnfbwc ; /usr/bin/python3'
Nov 25 09:34:03 compute-0 sudo[95809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:03 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.wjveyw for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:34:03 compute-0 python3[95813]: ansible-ansible.legacy.async_status Invoked with jid=j788888756668.95474 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 09:34:03 compute-0 sudo[95809]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:03 compute-0 ceph-mon[74207]: pgmap v12: 12 pgs: 12 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Nov 25 09:34:03 compute-0 ceph-mon[74207]: from='client.14598 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:03 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:03 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:03 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:03 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjveyw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 09:34:03 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjveyw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 09:34:03 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:03 compute-0 ceph-mon[74207]: Deploying daemon mds.cephfs.compute-0.wjveyw on compute-0
Nov 25 09:34:03 compute-0 ceph-mon[74207]: daemon mds.cephfs.compute-2.pwazzx assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 09:34:03 compute-0 ceph-mon[74207]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 09:34:03 compute-0 ceph-mon[74207]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 09:34:03 compute-0 ceph-mon[74207]: Cluster is now healthy
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mds.? [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] up:boot
Nov 25 09:34:03 compute-0 ceph-mon[74207]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:creating}
Nov 25 09:34:03 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.pwazzx"}]: dispatch
Nov 25 09:34:03 compute-0 ceph-mon[74207]: daemon mds.cephfs.compute-2.pwazzx is now active in filesystem cephfs as rank 0
Nov 25 09:34:03 compute-0 podman[95853]: 2025-11-25 09:34:03.32552145 +0000 UTC m=+0.026268578 container create 0f65602d9c154e7f86abd49511bfdcbcfd7e8ab2aa5738789ec5367e3d51b7bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-0-wjveyw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad6cf8481629eb151771b21b9fda536443cc23558d13a1d3cc6911703066a3e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad6cf8481629eb151771b21b9fda536443cc23558d13a1d3cc6911703066a3e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad6cf8481629eb151771b21b9fda536443cc23558d13a1d3cc6911703066a3e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad6cf8481629eb151771b21b9fda536443cc23558d13a1d3cc6911703066a3e2/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.wjveyw supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:03 compute-0 podman[95853]: 2025-11-25 09:34:03.365907048 +0000 UTC m=+0.066654178 container init 0f65602d9c154e7f86abd49511bfdcbcfd7e8ab2aa5738789ec5367e3d51b7bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-0-wjveyw, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:03 compute-0 podman[95853]: 2025-11-25 09:34:03.371561186 +0000 UTC m=+0.072308315 container start 0f65602d9c154e7f86abd49511bfdcbcfd7e8ab2aa5738789ec5367e3d51b7bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-0-wjveyw, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:34:03 compute-0 bash[95853]: 0f65602d9c154e7f86abd49511bfdcbcfd7e8ab2aa5738789ec5367e3d51b7bc
Nov 25 09:34:03 compute-0 podman[95853]: 2025-11-25 09:34:03.315203669 +0000 UTC m=+0.015950819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:03 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.wjveyw for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:34:03 compute-0 ceph-mds[95869]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 09:34:03 compute-0 ceph-mds[95869]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 25 09:34:03 compute-0 ceph-mds[95869]: main not setting numa affinity
Nov 25 09:34:03 compute-0 ceph-mds[95869]: pidfile_write: ignore empty --pid-file
Nov 25 09:34:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-0-wjveyw[95865]: starting mds.cephfs.compute-0.wjveyw at 
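cephadm runs each daemon under a per-fsid systemd template unit; the unit started above can be inspected like so (the ceph-<fsid>@<daemon> unit name pattern is assumed here, not taken from the log):

    $ systemctl status ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mds.cephfs.compute-0.wjveyw.service
    $ sudo cephadm ls      # all cephadm-managed daemons on this host
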
Nov 25 09:34:03 compute-0 sudo[95552]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:03 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Updating MDS map to version 3 from mon.1
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.knpqas", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.knpqas", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.knpqas", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:03 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.knpqas on compute-1
Nov 25 09:34:03 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.knpqas on compute-1
Nov 25 09:34:03 compute-0 sudo[95911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xznigzdbftadgpcajiqiuodirenktuvv ; /usr/bin/python3'
Nov 25 09:34:03 compute-0 sudo[95911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e4 new map
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-11-25T09:34:03.638492+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:03.638490+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 09:34:03 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Updating MDS map to version 4 from mon.1
Nov 25 09:34:03 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Monitors have assigned me to become a standby
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] up:active
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] up:boot
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 1 up:standby
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.wjveyw"} v 0)
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjveyw"}]: dispatch
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e4 all = 0
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e5 new map
Nov 25 09:34:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-11-25T09:34:03.644218+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:03.638490+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 09:34:03 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 1 up:standby
Nov 25 09:34:03 compute-0 python3[95913]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:03 compute-0 podman[95915]: 2025-11-25 09:34:03.746996057 +0000 UTC m=+0.030158112 container create 7c197677ecefcea9f9630473fab1322920beaa37d883aec6a4c53994db2e1bfb (image=quay.io/ceph/ceph:v19, name=elated_northcutt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:03 compute-0 systemd[1]: Started libpod-conmon-7c197677ecefcea9f9630473fab1322920beaa37d883aec6a4c53994db2e1bfb.scope.
Nov 25 09:34:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad3849ec19f5716f11590e54ecbead9d5b8e53690bcd61699a78f9be8dfb1e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad3849ec19f5716f11590e54ecbead9d5b8e53690bcd61699a78f9be8dfb1e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:03 compute-0 podman[95915]: 2025-11-25 09:34:03.801240451 +0000 UTC m=+0.084402516 container init 7c197677ecefcea9f9630473fab1322920beaa37d883aec6a4c53994db2e1bfb (image=quay.io/ceph/ceph:v19, name=elated_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 09:34:03 compute-0 podman[95915]: 2025-11-25 09:34:03.806303175 +0000 UTC m=+0.089465220 container start 7c197677ecefcea9f9630473fab1322920beaa37d883aec6a4c53994db2e1bfb (image=quay.io/ceph/ceph:v19, name=elated_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:34:03 compute-0 podman[95915]: 2025-11-25 09:34:03.807522844 +0000 UTC m=+0.090684899 container attach 7c197677ecefcea9f9630473fab1322920beaa37d883aec6a4c53994db2e1bfb (image=quay.io/ceph/ceph:v19, name=elated_northcutt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:03 compute-0 podman[95915]: 2025-11-25 09:34:03.73583111 +0000 UTC m=+0.018993185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v13: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 14 op/s
Nov 25 09:34:04 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14613 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:04 compute-0 elated_northcutt[95928]: 
Nov 25 09:34:04 compute-0 elated_northcutt[95928]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 09:34:04 compute-0 systemd[1]: libpod-7c197677ecefcea9f9630473fab1322920beaa37d883aec6a4c53994db2e1bfb.scope: Deactivated successfully.
Nov 25 09:34:04 compute-0 podman[95915]: 2025-11-25 09:34:04.093874866 +0000 UTC m=+0.377036931 container died 7c197677ecefcea9f9630473fab1322920beaa37d883aec6a4c53994db2e1bfb (image=quay.io/ceph/ceph:v19, name=elated_northcutt, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ad3849ec19f5716f11590e54ecbead9d5b8e53690bcd61699a78f9be8dfb1e9-merged.mount: Deactivated successfully.
Nov 25 09:34:04 compute-0 podman[95915]: 2025-11-25 09:34:04.11542007 +0000 UTC m=+0.398582125 container remove 7c197677ecefcea9f9630473fab1322920beaa37d883aec6a4c53994db2e1bfb (image=quay.io/ceph/ceph:v19, name=elated_northcutt, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:34:04 compute-0 systemd[1]: libpod-conmon-7c197677ecefcea9f9630473fab1322920beaa37d883aec6a4c53994db2e1bfb.scope: Deactivated successfully.
Nov 25 09:34:04 compute-0 sudo[95911]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:04 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.knpqas", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 09:34:04 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.knpqas", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 09:34:04 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:04 compute-0 ceph-mon[74207]: Deploying daemon mds.cephfs.compute-1.knpqas on compute-1
Nov 25 09:34:04 compute-0 ceph-mon[74207]: mds.? [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] up:active
Nov 25 09:34:04 compute-0 ceph-mon[74207]: mds.? [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] up:boot
Nov 25 09:34:04 compute-0 ceph-mon[74207]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 1 up:standby
Nov 25 09:34:04 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjveyw"}]: dispatch
Nov 25 09:34:04 compute-0 ceph-mon[74207]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 1 up:standby
Nov 25 09:34:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:34:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:34:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 25 09:34:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 09527637-e393-449e-a08c-4693b99d6230 (Updating mds.cephfs deployment (+3 -> 3))
Nov 25 09:34:04 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 09527637-e393-449e-a08c-4693b99d6230 (Updating mds.cephfs deployment (+3 -> 3)) in 3 seconds
Nov 25 09:34:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Nov 25 09:34:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 25 09:34:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:04 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 658d71db-78ca-48f1-9fd6-9de800e0f8bc (Updating alertmanager deployment (+1 -> 1))
Nov 25 09:34:04 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Nov 25 09:34:04 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Nov 25 09:34:04 compute-0 sudo[95963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:04 compute-0 sudo[95963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:04 compute-0 sudo[95963]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:04 compute-0 sudo[96010]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxuriiijbpsfufxwhxiikdzdlmodqnhf ; /usr/bin/python3'
Nov 25 09:34:04 compute-0 sudo[96010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:04 compute-0 sudo[96013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:04 compute-0 sudo[96013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:04 compute-0 python3[96014]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:04 compute-0 podman[96039]: 2025-11-25 09:34:04.792341702 +0000 UTC m=+0.027689618 container create 372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480 (image=quay.io/ceph/ceph:v19, name=affectionate_ardinghelli, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:34:04 compute-0 systemd[1]: Started libpod-conmon-372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480.scope.
Nov 25 09:34:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37599ad699824f9fc4154ad3c74a94a9d457ed3985dac55f23803add53149bfb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37599ad699824f9fc4154ad3c74a94a9d457ed3985dac55f23803add53149bfb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:04 compute-0 podman[96039]: 2025-11-25 09:34:04.853555283 +0000 UTC m=+0.088903200 container init 372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480 (image=quay.io/ceph/ceph:v19, name=affectionate_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:34:04 compute-0 podman[96039]: 2025-11-25 09:34:04.85927183 +0000 UTC m=+0.094619745 container start 372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480 (image=quay.io/ceph/ceph:v19, name=affectionate_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:34:04 compute-0 podman[96039]: 2025-11-25 09:34:04.860434571 +0000 UTC m=+0.095782507 container attach 372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480 (image=quay.io/ceph/ceph:v19, name=affectionate_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:04 compute-0 podman[96039]: 2025-11-25 09:34:04.781307663 +0000 UTC m=+0.016655599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:05 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14619 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:05 compute-0 affectionate_ardinghelli[96051]: 
Nov 25 09:34:05 compute-0 affectionate_ardinghelli[96051]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 25 09:34:05 compute-0 systemd[1]: libpod-372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480.scope: Deactivated successfully.
Nov 25 09:34:05 compute-0 conmon[96051]: conmon 372c8b01e17ca9b4e46b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480.scope/container/memory.events
Nov 25 09:34:05 compute-0 podman[96039]: 2025-11-25 09:34:05.176784902 +0000 UTC m=+0.412132818 container died 372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480 (image=quay.io/ceph/ceph:v19, name=affectionate_ardinghelli, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-37599ad699824f9fc4154ad3c74a94a9d457ed3985dac55f23803add53149bfb-merged.mount: Deactivated successfully.
Nov 25 09:34:05 compute-0 podman[96039]: 2025-11-25 09:34:05.196858342 +0000 UTC m=+0.432206259 container remove 372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480 (image=quay.io/ceph/ceph:v19, name=affectionate_ardinghelli, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:05 compute-0 systemd[1]: libpod-conmon-372c8b01e17ca9b4e46b21924acf3efead369d4d87e588ec3525cc87c9f41480.scope: Deactivated successfully.
Nov 25 09:34:05 compute-0 sudo[96010]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e6 new map
Nov 25 09:34:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-11-25T09:34:05.420267+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:03.638490+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.knpqas{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 09:34:05 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] up:boot
Nov 25 09:34:05 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 2 up:standby
Nov 25 09:34:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.knpqas"} v 0)
Nov 25 09:34:05 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.knpqas"}]: dispatch
Nov 25 09:34:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e6 all = 0
Nov 25 09:34:05 compute-0 ceph-mon[74207]: pgmap v13: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 14 op/s
Nov 25 09:34:05 compute-0 ceph-mon[74207]: from='client.14613 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:05 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:05 compute-0 ceph-mon[74207]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 25 09:34:05 compute-0 ceph-mon[74207]: mds.? [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] up:boot
Nov 25 09:34:05 compute-0 ceph-mon[74207]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 2 up:standby
Nov 25 09:34:05 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.knpqas"}]: dispatch
Nov 25 09:34:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v14: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 12 op/s
Nov 25 09:34:05 compute-0 sudo[96207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgjeskayhskbjgojzosjcnvpzftvrgxh ; /usr/bin/python3'
Nov 25 09:34:05 compute-0 sudo[96207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:06 compute-0 python3[96209]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:06 compute-0 podman[96210]: 2025-11-25 09:34:06.511353873 +0000 UTC m=+0.414262352 container create caf3cb7ec516c272bb0e1057efabfbfb5fd66d2c5daef3bd57a66952e3b345ac (image=quay.io/ceph/ceph:v19, name=jolly_kirch, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:34:06 compute-0 podman[96210]: 2025-11-25 09:34:06.461412019 +0000 UTC m=+0.364320508 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:06 compute-0 podman[96108]: 2025-11-25 09:34:06.527278652 +0000 UTC m=+1.543083805 volume create a3d0c1ae9be49af7dad91780739dfdd3c39de208ff25ca818d94830e7d7e0df8
Nov 25 09:34:06 compute-0 podman[96108]: 2025-11-25 09:34:06.531546327 +0000 UTC m=+1.547351480 container create 5faf550b99de3caca7a7c2fe7c2222e4edaec5bcbfd08791e25c5905b88822e9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_mirzakhani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 systemd[1]: Started libpod-conmon-caf3cb7ec516c272bb0e1057efabfbfb5fd66d2c5daef3bd57a66952e3b345ac.scope.
Nov 25 09:34:06 compute-0 systemd[1]: Started libpod-conmon-5faf550b99de3caca7a7c2fe7c2222e4edaec5bcbfd08791e25c5905b88822e9.scope.
Nov 25 09:34:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:06 compute-0 ceph-mon[74207]: from='client.14619 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e352c1c86d4a44af9a024f4565a79fdcf8d05f1cc29cfa0515dd46a37245a85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e352c1c86d4a44af9a024f4565a79fdcf8d05f1cc29cfa0515dd46a37245a85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:06 compute-0 podman[96210]: 2025-11-25 09:34:06.560195864 +0000 UTC m=+0.463104343 container init caf3cb7ec516c272bb0e1057efabfbfb5fd66d2c5daef3bd57a66952e3b345ac (image=quay.io/ceph/ceph:v19, name=jolly_kirch, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:06 compute-0 podman[96210]: 2025-11-25 09:34:06.564593012 +0000 UTC m=+0.467501482 container start caf3cb7ec516c272bb0e1057efabfbfb5fd66d2c5daef3bd57a66952e3b345ac (image=quay.io/ceph/ceph:v19, name=jolly_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 09:34:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:06 compute-0 podman[96210]: 2025-11-25 09:34:06.56557308 +0000 UTC m=+0.468481548 container attach caf3cb7ec516c272bb0e1057efabfbfb5fd66d2c5daef3bd57a66952e3b345ac (image=quay.io/ceph/ceph:v19, name=jolly_kirch, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796a324b1a734855028db72083ade497ce3b5fd2d26dc8c9195b05dcad77287c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:06 compute-0 podman[96108]: 2025-11-25 09:34:06.575511154 +0000 UTC m=+1.591316307 container init 5faf550b99de3caca7a7c2fe7c2222e4edaec5bcbfd08791e25c5905b88822e9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_mirzakhani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 podman[96108]: 2025-11-25 09:34:06.579550308 +0000 UTC m=+1.595355450 container start 5faf550b99de3caca7a7c2fe7c2222e4edaec5bcbfd08791e25c5905b88822e9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_mirzakhani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 podman[96108]: 2025-11-25 09:34:06.5806281 +0000 UTC m=+1.596433253 container attach 5faf550b99de3caca7a7c2fe7c2222e4edaec5bcbfd08791e25c5905b88822e9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_mirzakhani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 pedantic_mirzakhani[96275]: 65534 65534
Nov 25 09:34:06 compute-0 systemd[1]: libpod-5faf550b99de3caca7a7c2fe7c2222e4edaec5bcbfd08791e25c5905b88822e9.scope: Deactivated successfully.
Nov 25 09:34:06 compute-0 podman[96108]: 2025-11-25 09:34:06.581548755 +0000 UTC m=+1.597353898 container died 5faf550b99de3caca7a7c2fe7c2222e4edaec5bcbfd08791e25c5905b88822e9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_mirzakhani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-796a324b1a734855028db72083ade497ce3b5fd2d26dc8c9195b05dcad77287c-merged.mount: Deactivated successfully.
Nov 25 09:34:06 compute-0 podman[96108]: 2025-11-25 09:34:06.597543997 +0000 UTC m=+1.613349140 container remove 5faf550b99de3caca7a7c2fe7c2222e4edaec5bcbfd08791e25c5905b88822e9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_mirzakhani, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 podman[96108]: 2025-11-25 09:34:06.598982408 +0000 UTC m=+1.614787551 volume remove a3d0c1ae9be49af7dad91780739dfdd3c39de208ff25ca818d94830e7d7e0df8
Nov 25 09:34:06 compute-0 podman[96108]: 2025-11-25 09:34:06.513504538 +0000 UTC m=+1.529309700 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 25 09:34:06 compute-0 systemd[1]: libpod-conmon-5faf550b99de3caca7a7c2fe7c2222e4edaec5bcbfd08791e25c5905b88822e9.scope: Deactivated successfully.
Nov 25 09:34:06 compute-0 podman[96290]: 2025-11-25 09:34:06.640980858 +0000 UTC m=+0.026655408 volume create d1e05023b27763849295c7e38558e1f0a3db65bfe086233eac3080e58971d3e1
Nov 25 09:34:06 compute-0 podman[96290]: 2025-11-25 09:34:06.644848028 +0000 UTC m=+0.030522578 container create 1b345adc113aafa1c5a6762fd6dde55ff594d91574ea866f3ea3572cb12fcf10 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 ansible-async_wrapper.py[95477]: Done in kid B.
Nov 25 09:34:06 compute-0 systemd[1]: Started libpod-conmon-1b345adc113aafa1c5a6762fd6dde55ff594d91574ea866f3ea3572cb12fcf10.scope.
Nov 25 09:34:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5d5450f958427fb2fb23866ac0eac024f4aba1847b61f5762ab89776a8dee7/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:06 compute-0 podman[96290]: 2025-11-25 09:34:06.712941549 +0000 UTC m=+0.098616109 container init 1b345adc113aafa1c5a6762fd6dde55ff594d91574ea866f3ea3572cb12fcf10 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 podman[96290]: 2025-11-25 09:34:06.717401466 +0000 UTC m=+0.103076016 container start 1b345adc113aafa1c5a6762fd6dde55ff594d91574ea866f3ea3572cb12fcf10 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 amazing_herschel[96324]: 65534 65534
Nov 25 09:34:06 compute-0 podman[96290]: 2025-11-25 09:34:06.718509154 +0000 UTC m=+0.104183704 container attach 1b345adc113aafa1c5a6762fd6dde55ff594d91574ea866f3ea3572cb12fcf10 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 systemd[1]: libpod-1b345adc113aafa1c5a6762fd6dde55ff594d91574ea866f3ea3572cb12fcf10.scope: Deactivated successfully.
Nov 25 09:34:06 compute-0 podman[96290]: 2025-11-25 09:34:06.719536741 +0000 UTC m=+0.105211291 container died 1b345adc113aafa1c5a6762fd6dde55ff594d91574ea866f3ea3572cb12fcf10 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 podman[96290]: 2025-11-25 09:34:06.632394552 +0000 UTC m=+0.018069132 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 25 09:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e5d5450f958427fb2fb23866ac0eac024f4aba1847b61f5762ab89776a8dee7-merged.mount: Deactivated successfully.
Nov 25 09:34:06 compute-0 podman[96290]: 2025-11-25 09:34:06.738544152 +0000 UTC m=+0.124218702 container remove 1b345adc113aafa1c5a6762fd6dde55ff594d91574ea866f3ea3572cb12fcf10 (image=quay.io/prometheus/alertmanager:v0.25.0, name=amazing_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:06 compute-0 podman[96290]: 2025-11-25 09:34:06.740135812 +0000 UTC m=+0.125810362 volume remove d1e05023b27763849295c7e38558e1f0a3db65bfe086233eac3080e58971d3e1
Nov 25 09:34:06 compute-0 systemd[1]: libpod-conmon-1b345adc113aafa1c5a6762fd6dde55ff594d91574ea866f3ea3572cb12fcf10.scope: Deactivated successfully.
Nov 25 09:34:06 compute-0 systemd[1]: Reloading.
Nov 25 09:34:06 compute-0 systemd-sysv-generator[96368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Nov 25 09:34:06 compute-0 systemd-rc-local-generator[96359]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:06 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.14625 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:06 compute-0 jolly_kirch[96272]: 
Nov 25 09:34:06 compute-0 jolly_kirch[96272]: [{"container_id": "4991d88b018f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.10%", "created": "2025-11-25T09:31:46.712328Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T09:33:53.538373Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-11-25T09:31:46.653599Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@crash.compute-0", "version": "19.2.3"}, {"container_id": "188b4764fe5a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.46%", "created": "2025-11-25T09:32:17.810878Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-25T09:33:53.392407Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2025-11-25T09:32:17.457570Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@crash.compute-1", "version": "19.2.3"}, {"container_id": "75aa60884316", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.27%", "created": "2025-11-25T09:33:04.393126Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-25T09:33:53.106349Z", "memory_usage": 7803502, "ports": [], "service_name": "crash", "started": "2025-11-25T09:33:04.325032Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-0.wjveyw", "daemon_name": "mds.cephfs.compute-0.wjveyw", "daemon_type": "mds", "events": ["2025-11-25T09:34:03.420389Z daemon:mds.cephfs.compute-0.wjveyw [INFO] \"Deployed mds.cephfs.compute-0.wjveyw on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-1.knpqas", "daemon_name": "mds.cephfs.compute-1.knpqas", "daemon_type": "mds", "events": ["2025-11-25T09:34:04.550100Z daemon:mds.cephfs.compute-1.knpqas [INFO] \"Deployed mds.cephfs.compute-1.knpqas on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-2.pwazzx", "daemon_name": "mds.cephfs.compute-2.pwazzx", 
"daemon_type": "mds", "events": ["2025-11-25T09:34:02.274297Z daemon:mds.cephfs.compute-2.pwazzx [INFO] \"Deployed mds.cephfs.compute-2.pwazzx on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "b4c97af4a954", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "26.45%", "created": "2025-11-25T09:31:18.758046Z", "daemon_id": "compute-0.zcfgby", "daemon_name": "mgr.compute-0.zcfgby", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T09:33:53.538303Z", "memory_usage": 540121497, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-25T09:31:18.694495Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mgr.compute-0.zcfgby", "version": "19.2.3"}, {"container_id": "8e1b9a1b4f08", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "42.59%", "created": "2025-11-25T09:33:03.233203Z", "daemon_id": "compute-1.plffrn", "daemon_name": "mgr.compute-1.plffrn", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-25T09:33:53.392646Z", "memory_usage": 504260198, "ports": [8765], "service_name": "mgr", "started": "2025-11-25T09:33:03.171518Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mgr.compute-1.plffrn", "version": "19.2.3"}, {"container_id": "ebeb76731ed5", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "38.52%", "created": "2025-11-25T09:32:58.144142Z", "daemon_id": "compute-2.flybft", "daemon_name": "mgr.compute-2.flybft", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-25T09:33:53.106278Z", "memory_usage": 506252492, "ports": [8765], "service_name": "mgr", "started": "2025-11-25T09:32:58.079142Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mgr.compute-2.flybft", "version": "19.2.3"}, {"container_id": "f4319dd17981", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "1.83%", "created": "2025-11-25T09:31:16.034472Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": 
"2025-11-25T09:33:53.538207Z", "memory_request": 2147483648, "memory_usage": 58856570, "ports": [], "service_name": "mon", "started": "2025-11-25T09:31:17.515960Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mon.compute-0", "version": "19.2.3"}, {"container_id": "20d26b9df30e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.53%", "created": "2025-11-25T09:32:57.020141Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-25T09:33:53.392563Z", "memory_request": 2147483648, "memory_usage": 44753223, "ports": [], "service_name": "mon", "started": "2025-11-25T09:32:56.958709Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mon.compute-1", "version": "19.2.3"}, {"container_id": "548c61af73ea", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.76%", "created": "2025-11-25T09:32:51.106161Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-25T09:33:53.106175Z", "memory_request": 2147483648, "memory_usage": 43924848, "ports": [], "service_name": "mon", "started": "2025-11-25T09:32:50.570837Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@mon.compute-2", "version": "19.2.3"}, {"container_id": "dbe7cf1e9535", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e", "quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.10%", "created": "2025-11-25T09:33:33.475244Z", "daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T09:33:53.538575Z", "memory_usage": 4419747, "ports": [9100], "service_name": "node-exporter", "started": "2025-11-25T09:33:33.422561Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@node-exporter.compute-0", "version": "1.7.0"}, {"container_id": "2a1e927df99a", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e", "quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.16%", "created": "2025-11-25T09:33:43.339590Z", "daemon_id": "compute-1", "daemon_name": 
"node-exporter.compute-1", "daemon_type": "node-exporter", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-25T09:33:53.392783Z", "memory_usage": 4114612, "ports": [9100], "service_name": "node-exporter", "started": "2025-11-25T09:33:43.277679Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@node-exporter.compute-1", "version": "1.7.0"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2025-11-25T09:33:57.975005Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "c383e3b23555", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.22%", "created": "2025-11-25T09:32:26.214711Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T09:33:53.538442Z", "memory_request": 4294967296, "memory_usage": 66081259, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T09:32:26.150466Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@osd.1", "version": "19.2.3"}, {"container_id": "84467a07d50d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.70%", "created": "2025-11-25T09:32:26.076931Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-25T09:33:53.392496Z", "memory_request": 4294967296, "memory_usage": 58793656, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T09:32:26.016993Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@osd.0", "version": "19.2.3"}, {"container_id": "bdbe41bd7d1c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.16%", "created": "2025-11-25T09:33:12.843102Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-25T09:33:53.106416Z", "memory_request": 4294967296, "memory_usage": 57650708, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T09:33:12.779982Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@osd.2", "version": "19.2.3"}, {"container_id": "3d928580d6f3", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.40%", "created": "2025-11-25T09:33:31.244222Z", "daemon_id": "rgw.compute-0.uosdwi", "daemon_name": "rgw.rgw.compute-0.uosdwi", "daemon_type": "rgw", "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-11-25T09:33:53.538508Z", "memory_usage": 106325606, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-25T09:33:31.181564Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@rgw.rgw.compute-0.uosdwi", "version": "19.2.3"}, {"container_id": "811e4dee2065", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.38%", "created": "2025-11-25T09:33:30.057465Z", "daemon_id": "rgw.compute-1.lyczeh", "daemon_name": "rgw.rgw.compute-1.lyczeh", "daemon_type": "rgw", "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "last_refresh": "2025-11-25T09:33:53.392711Z", "memory_usage": 104259911, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-25T09:33:29.997570Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@rgw.rgw.compute-1.lyczeh", "version": "19.2.3"}, {"container_id": "52e88d374316", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.29%", "created"
Nov 25 09:34:06 compute-0 jolly_kirch[96272]: : "2025-11-25T09:33:28.895077Z", "daemon_id": "rgw.compute-2.oidoiv", "daemon_name": "rgw.rgw.compute-2.oidoiv", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2025-11-25T09:33:53.106488Z", "memory_usage": 103483965, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-25T09:33:28.832677Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@rgw.rgw.compute-2.oidoiv", "version": "19.2.3"}]
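The JSON array above, emitted by the throwaway ceph container jolly_kirch, is "ceph orch ps" output in JSON form (the matching mon audit entry for the orch ps command appears further down): one record per cephadm-managed daemon, carrying the image digest, the owning systemd unit in the form ceph-<fsid>@<daemon-name>, a status/status_desc pair (1/"running"; 2/"starting" for the just-deployed node-exporter.compute-2), and memory_usage alongside the memory_request cephadm applied to the mon and osd units. A minimal sketch of checking such output from a shell, assuming the ceph CLI and jq are available on a host with the admin keyring (field names are taken from the records above; the queries themselves are illustrative):

    # List every daemon that is not reported as running.
    ceph orch ps --format json \
      | jq -r '.[] | select(.status_desc != "running")
               | "\(.daemon_name) on \(.hostname): \(.status_desc)"'

    # Compare live memory usage against the configured request, where one exists.
    ceph orch ps --format json \
      | jq -r '.[] | select(.memory_request != null)
               | "\(.daemon_name)\t\(.memory_usage // 0) / \(.memory_request) bytes"'

Run against the records above, the first query would report only node-exporter.compute-2, which is still in status 2 ("starting") because its deployment event is only seconds old.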
Nov 25 09:34:06 compute-0 podman[96210]: 2025-11-25 09:34:06.892489689 +0000 UTC m=+0.795398158 container died caf3cb7ec516c272bb0e1057efabfbfb5fd66d2c5daef3bd57a66952e3b345ac (image=quay.io/ceph/ceph:v19, name=jolly_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 09:34:06 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 8 completed events
Nov 25 09:34:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:34:06 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:06 compute-0 systemd[1]: libpod-caf3cb7ec516c272bb0e1057efabfbfb5fd66d2c5daef3bd57a66952e3b345ac.scope: Deactivated successfully.
Nov 25 09:34:06 compute-0 podman[96210]: 2025-11-25 09:34:06.975495732 +0000 UTC m=+0.878404200 container remove caf3cb7ec516c272bb0e1057efabfbfb5fd66d2c5daef3bd57a66952e3b345ac (image=quay.io/ceph/ceph:v19, name=jolly_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:34:06 compute-0 systemd[1]: libpod-conmon-caf3cb7ec516c272bb0e1057efabfbfb5fd66d2c5daef3bd57a66952e3b345ac.scope: Deactivated successfully.
Nov 25 09:34:06 compute-0 sudo[96207]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:07 compute-0 systemd[1]: Reloading.
Nov 25 09:34:07 compute-0 systemd-rc-local-generator[96412]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:07 compute-0 rsyslogd[961]: message too long (16383) with configured size 8096, begin of message is: [{"container_id": "4991d88b018f", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
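The rsyslogd complaint above shows why the full orch ps blob survives only in the journal: the 16 KiB message exceeded rsyslog's 8096-byte default maxMessageSize and was truncated on the rsyslog side. Raising the limit is a one-line global setting; a sketch, assuming a stock RHEL 9 rsyslog layout (the drop-in path is an assumption, and on some versions maxMessageSize must be parsed before any input module loads, which can force the directive to the top of /etc/rsyslog.conf instead):

    # Hypothetical drop-in raising the per-message ceiling to 64 KiB.
    printf 'global(maxMessageSize="64k")\n' > /etc/rsyslog.d/00-maxsize.conf
    systemctl restart rsyslog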
Nov 25 09:34:07 compute-0 systemd-sysv-generator[96415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e352c1c86d4a44af9a024f4565a79fdcf8d05f1cc29cfa0515dd46a37245a85-merged.mount: Deactivated successfully.
Nov 25 09:34:07 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:34:07 compute-0 podman[96467]: 2025-11-25 09:34:07.350527461 +0000 UTC m=+0.025515018 volume create e462e866e02932d54eb2ee75eeae45d16be498a10b71c45c1a27830307cef46b
Nov 25 09:34:07 compute-0 podman[96467]: 2025-11-25 09:34:07.355303594 +0000 UTC m=+0.030291151 container create 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f8578f82279ba71da6754a8d52d54e64e3669b64604653e0209b38374ffd41/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f8578f82279ba71da6754a8d52d54e64e3669b64604653e0209b38374ffd41/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:07 compute-0 podman[96467]: 2025-11-25 09:34:07.396793525 +0000 UTC m=+0.071781101 container init 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:07 compute-0 podman[96467]: 2025-11-25 09:34:07.40035021 +0000 UTC m=+0.075337767 container start 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:07 compute-0 bash[96467]: 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da
Nov 25 09:34:07 compute-0 podman[96467]: 2025-11-25 09:34:07.341182084 +0000 UTC m=+0.016169661 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 25 09:34:07 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:34:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:07.423Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Nov 25 09:34:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:07.423Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Nov 25 09:34:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:07.429Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.26.109 port=9094
Nov 25 09:34:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:07.430Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Nov 25 09:34:07 compute-0 sudo[96013]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 658d71db-78ca-48f1-9fd6-9de800e0f8bc (Updating alertmanager deployment (+1 -> 1))
Nov 25 09:34:07 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 658d71db-78ca-48f1-9fd6-9de800e0f8bc (Updating alertmanager deployment (+1 -> 1)) in 3 seconds
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev f3b65cae-a13f-45d7-b0ba-7b626974ada1 (Updating grafana deployment (+1 -> 1))
Nov 25 09:34:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:07.461Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 25 09:34:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:07.462Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 25 09:34:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:07.465Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Nov 25 09:34:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:07.465Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
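At this point Alertmanager 0.25.0 is up: it loaded /etc/alertmanager/alertmanager.yml, bound plain HTTP on 192.168.122.100:9093, advertises 192.168.26.109:9094 for cluster gossip, and is waiting for the gossip to settle. A quick liveness probe against its v2 HTTP API, assuming the port is reachable from the host (the endpoint path comes from the upstream Alertmanager API, not from this log):

    # Show the gossip cluster state and the first lines of the loaded config.
    curl -s http://192.168.122.100:9093/api/v2/status \
      | jq -r '.cluster.status, .config.original' | head -n 20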
Nov 25 09:34:07 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Nov 25 09:34:07 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Nov 25 09:34:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
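The sequence above is cephadm provisioning Grafana's TLS material: it regenerates a self-signed certificate and key, stores them in the mon config-key store under mgr/cephadm/cert_store.*, and disables dashboard-side verification (set-grafana-api-ssl-verify false), the usual pairing for self-signed monitoring certs. A sketch for inspecting the result, assuming admin access and that the dashboard module exposes the matching get- command for the set- form used above:

    # Confirm the dashboard will skip verification of the self-signed cert.
    ceph dashboard get-grafana-api-ssl-verify

    # List the certificate/key entries cephadm just wrote.
    ceph config-key ls \
      | jq -r '.[] | select(startswith("mgr/cephadm/cert_store"))'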
Nov 25 09:34:07 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Nov 25 09:34:07 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e7 new map
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-11-25T09:34:07:562567+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:06.658104+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.knpqas{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
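The print_map block above (epoch e7) is the complete CephFS state in one dump: max_mds 1 with rank 0 held by mds.cephfs.compute-2.pwazzx (gid 14601), standbys on compute-1 and compute-0, data pool 7, metadata pool 6, and standby_count_wanted 1. The same map can be read back in condensed form; a sketch, assuming the admin keyring (both subcommands are standard ceph CLI):

    # One-line-per-rank summary of the fsmap.
    ceph fs status cephfs

    # Raw map fields, e.g. to confirm the rank count and standby policy.
    ceph fs get cephfs | grep -E '^(max_mds|standby_count_wanted|in|up)\b'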
Nov 25 09:34:07 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Updating MDS map to version 7 from mon.1
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] up:active
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] up:standby
Nov 25 09:34:07 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 2 up:standby
Nov 25 09:34:07 compute-0 ceph-mon[74207]: pgmap v14: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 12 op/s
Nov 25 09:34:07 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 25 09:34:07 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:07 compute-0 sudo[96496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:07 compute-0 sudo[96496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:07 compute-0 sudo[96496]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:07 compute-0 sudo[96554]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njibwzedqsvftcvtvyefjgltgsvgvqgz ; /usr/bin/python3'
Nov 25 09:34:07 compute-0 sudo[96554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:07 compute-0 sudo[96534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:07 compute-0 sudo[96534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:07 compute-0 python3[96569]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
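The ansible task above shows the pattern this job uses for every ad-hoc query: run a single ceph command in a disposable quay.io/ceph/ceph:v19 container that mounts the host's /etc/ceph, so no ceph CLI needs to be installed on the node itself. A hypothetical wrapper for the same pattern (the function name is illustrative; the flags are copied from the invocation above):

    # cephx: run one ceph command in a throwaway container using the host's
    # config and admin keyring, exactly as the playbook does.
    cephx() {
      podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "$@"
    }
    cephx -s -f json | jq -r '.health.status'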
Nov 25 09:34:07 compute-0 podman[96572]: 2025-11-25 09:34:07.773369446 +0000 UTC m=+0.031179677 container create 9c9e84deac60038c930d4b25c780233c25c532da0fa7b636fd317eaf662781f4 (image=quay.io/ceph/ceph:v19, name=zen_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:34:07 compute-0 systemd[1]: Started libpod-conmon-9c9e84deac60038c930d4b25c780233c25c532da0fa7b636fd317eaf662781f4.scope.
Nov 25 09:34:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95640a7d1b7519fc7ed9d2a70824439c3077013dd27b378dcef0a0439e32ecf1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95640a7d1b7519fc7ed9d2a70824439c3077013dd27b378dcef0a0439e32ecf1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:07 compute-0 podman[96572]: 2025-11-25 09:34:07.830032649 +0000 UTC m=+0.087842900 container init 9c9e84deac60038c930d4b25c780233c25c532da0fa7b636fd317eaf662781f4 (image=quay.io/ceph/ceph:v19, name=zen_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:07 compute-0 podman[96572]: 2025-11-25 09:34:07.83596886 +0000 UTC m=+0.093779091 container start 9c9e84deac60038c930d4b25c780233c25c532da0fa7b636fd317eaf662781f4 (image=quay.io/ceph/ceph:v19, name=zen_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:34:07 compute-0 podman[96572]: 2025-11-25 09:34:07.838968343 +0000 UTC m=+0.096778574 container attach 9c9e84deac60038c930d4b25c780233c25c532da0fa7b636fd317eaf662781f4 (image=quay.io/ceph/ceph:v19, name=zen_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v15: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 11 op/s
Nov 25 09:34:07 compute-0 podman[96572]: 2025-11-25 09:34:07.762085875 +0000 UTC m=+0.019896126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 25 09:34:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129853501' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:34:08 compute-0 zen_agnesi[96594]: 
Nov 25 09:34:08 compute-0 zen_agnesi[96594]: {"fsid":"af1c9ae3-08d7-5547-a53d-2cccf7c6ef90","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":66,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1764063199,"num_in_osds":3,"osd_in_since":1764063185,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":12}],"num_pgs":12,"num_pools":12,"num_objects":216,"data_bytes":467025,"bytes_used":88682496,"bytes_avail":64323244032,"bytes_total":64411926528,"read_bytes_sec":18710,"write_bytes_sec":1488,"read_op_per_sec":6,"write_op_per_sec":5},"fsmap":{"epoch":7,"btime":"2025-11-25T09:34:07:562567+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.pwazzx","status":"up:active","gid":14601}],"up:standby":2},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":3,"modified":"2025-11-25T09:33:38.641268+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.zcfgby":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.plffrn":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.flybft":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14412":{"start_epoch":3,"start_stamp":"2025-11-25T09:33:37.895580+0000","gid":14412,"addr":"192.168.122.100:0/370176697","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.uosdwi","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865360","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"6af48147-6aba-44e3-91a3-565a32433f82","zone_name":"default","zonegroup_id":"7f877101-a613-42fa-9374-f143e99606e2","zonegroup_name":"default"},"task_status":{}},"24152":{"start_epoch":3,"start_stamp":"2025-11-25T09:33:37.825710+0000","gid":24152,"addr":"192.168.122.101:0/1293368742","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.lyczeh","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865372","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"6af48147-6aba-44e3-91a3-565a32433f82","zone_name":"default","zonegroup_id":"7f877101-a613-42fa-9374-f143e99606e2","zonegroup_name":"default"},"task_status":{}},"24163":{"start_epoch":3,"start_stamp":"2025-11-25T09:33:37.826789+0000","gid":24163,"addr":"192.168.122.102:0/1045634058","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC 7763 64-Core Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.oidoiv","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7865372","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"6af48147-6aba-44e3-91a3-565a32433f82","zone_name":"default","zonegroup_id":"7f877101-a613-42fa-9374-f143e99606e2","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"658d71db-78ca-48f1-9fd6-9de800e0f8bc":{"message":"Updating alertmanager deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 25 09:34:08 compute-0 systemd[1]: libpod-9c9e84deac60038c930d4b25c780233c25c532da0fa7b636fd317eaf662781f4.scope: Deactivated successfully.
Nov 25 09:34:08 compute-0 podman[96572]: 2025-11-25 09:34:08.169190995 +0000 UTC m=+0.427001226 container died 9c9e84deac60038c930d4b25c780233c25c532da0fa7b636fd317eaf662781f4 (image=quay.io/ceph/ceph:v19, name=zen_agnesi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-95640a7d1b7519fc7ed9d2a70824439c3077013dd27b378dcef0a0439e32ecf1-merged.mount: Deactivated successfully.
Nov 25 09:34:08 compute-0 podman[96572]: 2025-11-25 09:34:08.190465699 +0000 UTC m=+0.448275930 container remove 9c9e84deac60038c930d4b25c780233c25c532da0fa7b636fd317eaf662781f4 (image=quay.io/ceph/ceph:v19, name=zen_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:34:08 compute-0 sudo[96554]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:08 compute-0 systemd[1]: libpod-conmon-9c9e84deac60038c930d4b25c780233c25c532da0fa7b636fd317eaf662781f4.scope: Deactivated successfully.
Nov 25 09:34:08 compute-0 ceph-mon[74207]: from='client.14625 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 09:34:08 compute-0 ceph-mon[74207]: Regenerating cephadm self-signed grafana TLS certificates
Nov 25 09:34:08 compute-0 ceph-mon[74207]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 25 09:34:08 compute-0 ceph-mon[74207]: Deploying daemon grafana.compute-0 on compute-0
Nov 25 09:34:08 compute-0 ceph-mon[74207]: mds.? [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] up:active
Nov 25 09:34:08 compute-0 ceph-mon[74207]: mds.? [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] up:standby
Nov 25 09:34:08 compute-0 ceph-mon[74207]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 2 up:standby
Nov 25 09:34:08 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3129853501' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 09:34:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e8 new map
Nov 25 09:34:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-11-25T09:34:08:572718+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T09:33:52.871685+0000
                                           modified        2025-11-25T09:34:06.658104+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14601}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14601 members: 14601
                                           [mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.knpqas{-1:24293} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 09:34:08 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] up:standby
Nov 25 09:34:08 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 2 up:standby
Nov 25 09:34:08 compute-0 sudo[96687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycqnityadzdoijddouliufnoooiuvjzq ; /usr/bin/python3'
Nov 25 09:34:08 compute-0 sudo[96687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:08 compute-0 python3[96689]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:09 compute-0 podman[96695]: 2025-11-25 09:34:09.10524031 +0000 UTC m=+0.132645947 container create 761ed2db86c046d55ba8a40d8b6e6b20ec4ae6f24e83ce61b491a2e718cab1c8 (image=quay.io/ceph/ceph:v19, name=thirsty_clarke, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:09 compute-0 systemd[1]: Started libpod-conmon-761ed2db86c046d55ba8a40d8b6e6b20ec4ae6f24e83ce61b491a2e718cab1c8.scope.
Nov 25 09:34:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d626468b02c27fd934ed1fc23fefb9051cfeb71323307bbc578a9d1722aff9fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d626468b02c27fd934ed1fc23fefb9051cfeb71323307bbc578a9d1722aff9fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:09 compute-0 podman[96695]: 2025-11-25 09:34:09.092254861 +0000 UTC m=+0.119660518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:09 compute-0 podman[96695]: 2025-11-25 09:34:09.164816986 +0000 UTC m=+0.192222633 container init 761ed2db86c046d55ba8a40d8b6e6b20ec4ae6f24e83ce61b491a2e718cab1c8 (image=quay.io/ceph/ceph:v19, name=thirsty_clarke, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:09 compute-0 podman[96695]: 2025-11-25 09:34:09.169990528 +0000 UTC m=+0.197396165 container start 761ed2db86c046d55ba8a40d8b6e6b20ec4ae6f24e83ce61b491a2e718cab1c8 (image=quay.io/ceph/ceph:v19, name=thirsty_clarke, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:09 compute-0 podman[96695]: 2025-11-25 09:34:09.171192193 +0000 UTC m=+0.198597831 container attach 761ed2db86c046d55ba8a40d8b6e6b20ec4ae6f24e83ce61b491a2e718cab1c8 (image=quay.io/ceph/ceph:v19, name=thirsty_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:34:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:09.431Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000603951s
Nov 25 09:34:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 25 09:34:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994516596' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:34:09 compute-0 thirsty_clarke[96753]: 
Nov 25 09:34:09 compute-0 systemd[1]: libpod-761ed2db86c046d55ba8a40d8b6e6b20ec4ae6f24e83ce61b491a2e718cab1c8.scope: Deactivated successfully.
Nov 25 09:34:09 compute-0 thirsty_clarke[96753]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_SSL_VERIFY","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.zcfgby/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.plffrn/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.flybft/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.uosdwi","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.lyczeh","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.oidoiv","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
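thirsty_clarke's output is the "config dump -f json" requested two entries earlier: the global section carries the rgw_keystone_* wiring for Swift/S3 authentication against https://keystone-internal.openstack.svc:5000, the mgr section pins the dashboard to its Grafana/Prometheus/Alertmanager endpoints, and each RGW client section gets its beast frontend binding. A sketch for extracting one slice of it, again via the hypothetical cephx wrapper (field names exactly as in the dump above):

    # Show how RGW is wired to Keystone, one option per line.
    cephx config dump -f json \
      | jq -r '.[] | select(.section == "global" and (.name | startswith("rgw_keystone")))
               | "\(.name) = \(.value)"'

Note that rgw_keystone_admin_password comes back in clear text, so dumps like this should be treated as sensitive.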
Nov 25 09:34:09 compute-0 podman[96695]: 2025-11-25 09:34:09.472696289 +0000 UTC m=+0.500101926 container died 761ed2db86c046d55ba8a40d8b6e6b20ec4ae6f24e83ce61b491a2e718cab1c8 (image=quay.io/ceph/ceph:v19, name=thirsty_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:09 compute-0 ceph-mon[74207]: pgmap v15: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 11 op/s
Nov 25 09:34:09 compute-0 ceph-mon[74207]: mds.? [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] up:standby
Nov 25 09:34:09 compute-0 ceph-mon[74207]: fsmap cephfs:1 {0=cephfs.compute-2.pwazzx=up:active} 2 up:standby
Nov 25 09:34:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1994516596' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 09:34:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v16: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 9 op/s
Nov 25 09:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-d626468b02c27fd934ed1fc23fefb9051cfeb71323307bbc578a9d1722aff9fd-merged.mount: Deactivated successfully.
Nov 25 09:34:09 compute-0 podman[96695]: 2025-11-25 09:34:09.87261922 +0000 UTC m=+0.900024857 container remove 761ed2db86c046d55ba8a40d8b6e6b20ec4ae6f24e83ce61b491a2e718cab1c8 (image=quay.io/ceph/ceph:v19, name=thirsty_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 09:34:09 compute-0 systemd[1]: libpod-conmon-761ed2db86c046d55ba8a40d8b6e6b20ec4ae6f24e83ce61b491a2e718cab1c8.scope: Deactivated successfully.
Nov 25 09:34:09 compute-0 sudo[96687]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:10 compute-0 sudo[96876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyaxoiqxtzzpnxneazbkiqrpllhejtlt ; /usr/bin/python3'
Nov 25 09:34:10 compute-0 sudo[96876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:10 compute-0 python3[96878]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:10 compute-0 podman[96879]: 2025-11-25 09:34:10.673523867 +0000 UTC m=+0.029072634 container create 16711dbaadd4039fd82180f6baf341a14749ded1f133a09330c9f885077fb025 (image=quay.io/ceph/ceph:v19, name=epic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:34:10 compute-0 systemd[1]: Started libpod-conmon-16711dbaadd4039fd82180f6baf341a14749ded1f133a09330c9f885077fb025.scope.
Nov 25 09:34:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cecde1a7fe2ccab465df37ba3622faa570cd23afc356916f6935ee71e06d9fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cecde1a7fe2ccab465df37ba3622faa570cd23afc356916f6935ee71e06d9fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:10 compute-0 podman[96879]: 2025-11-25 09:34:10.725693381 +0000 UTC m=+0.081242158 container init 16711dbaadd4039fd82180f6baf341a14749ded1f133a09330c9f885077fb025 (image=quay.io/ceph/ceph:v19, name=epic_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:34:10 compute-0 podman[96879]: 2025-11-25 09:34:10.729981855 +0000 UTC m=+0.085530622 container start 16711dbaadd4039fd82180f6baf341a14749ded1f133a09330c9f885077fb025 (image=quay.io/ceph/ceph:v19, name=epic_hofstadter, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:34:10 compute-0 podman[96879]: 2025-11-25 09:34:10.731099712 +0000 UTC m=+0.086648489 container attach 16711dbaadd4039fd82180f6baf341a14749ded1f133a09330c9f885077fb025 (image=quay.io/ceph/ceph:v19, name=epic_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:34:10 compute-0 podman[96879]: 2025-11-25 09:34:10.662567865 +0000 UTC m=+0.018116652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:11 compute-0 epic_hofstadter[96891]: mimic
Nov 25 09:34:11 compute-0 systemd[1]: libpod-16711dbaadd4039fd82180f6baf341a14749ded1f133a09330c9f885077fb025.scope: Deactivated successfully.
Nov 25 09:34:11 compute-0 podman[96879]: 2025-11-25 09:34:11.040576016 +0000 UTC m=+0.396124783 container died 16711dbaadd4039fd82180f6baf341a14749ded1f133a09330c9f885077fb025 (image=quay.io/ceph/ceph:v19, name=epic_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 25 09:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cecde1a7fe2ccab465df37ba3622faa570cd23afc356916f6935ee71e06d9fc-merged.mount: Deactivated successfully.
Nov 25 09:34:11 compute-0 podman[96879]: 2025-11-25 09:34:11.064680283 +0000 UTC m=+0.420229051 container remove 16711dbaadd4039fd82180f6baf341a14749ded1f133a09330c9f885077fb025 (image=quay.io/ceph/ceph:v19, name=epic_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:34:11 compute-0 systemd[1]: libpod-conmon-16711dbaadd4039fd82180f6baf341a14749ded1f133a09330c9f885077fb025.scope: Deactivated successfully.
Nov 25 09:34:11 compute-0 sudo[96876]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:11 compute-0 ceph-mon[74207]: pgmap v16: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 9 op/s
Nov 25 09:34:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1803730132' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 25 09:34:11 compute-0 sudo[96962]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adrruskbgdpuxovccunyktlnkotrtwxy ; /usr/bin/python3'
Nov 25 09:34:11 compute-0 sudo[96962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v17: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 09:34:11 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 9 completed events
Nov 25 09:34:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:34:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:11 compute-0 python3[96964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:12 compute-0 ceph-mon[74207]: pgmap v17: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 09:34:12 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v18: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 09:34:14 compute-0 podman[96965]: 2025-11-25 09:34:14.054139011 +0000 UTC m=+2.098357864 container create d14c9cf88b2e32b2d2b7b442c5c49cac002b6e191de5e96f1cf0c01793c261ac (image=quay.io/ceph/ceph:v19, name=priceless_sammet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:34:14 compute-0 systemd[1]: Started libpod-conmon-d14c9cf88b2e32b2d2b7b442c5c49cac002b6e191de5e96f1cf0c01793c261ac.scope.
Nov 25 09:34:14 compute-0 podman[96965]: 2025-11-25 09:34:14.037615152 +0000 UTC m=+2.081834025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:14 compute-0 podman[96641]: 2025-11-25 09:34:14.081431449 +0000 UTC m=+6.120385431 container create a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 podman[96641]: 2025-11-25 09:34:14.069520315 +0000 UTC m=+6.108474308 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 25 09:34:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2707a74eb727a5b3cd14f4c64ecf6786664d9b1bce0e4635fbf15d388086ea23/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2707a74eb727a5b3cd14f4c64ecf6786664d9b1bce0e4635fbf15d388086ea23/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:14 compute-0 podman[96965]: 2025-11-25 09:34:14.100203265 +0000 UTC m=+2.144422128 container init d14c9cf88b2e32b2d2b7b442c5c49cac002b6e191de5e96f1cf0c01793c261ac (image=quay.io/ceph/ceph:v19, name=priceless_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:14 compute-0 systemd[1]: Started libpod-conmon-a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8.scope.
Nov 25 09:34:14 compute-0 podman[96965]: 2025-11-25 09:34:14.104704529 +0000 UTC m=+2.148923383 container start d14c9cf88b2e32b2d2b7b442c5c49cac002b6e191de5e96f1cf0c01793c261ac (image=quay.io/ceph/ceph:v19, name=priceless_sammet, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:14 compute-0 podman[96965]: 2025-11-25 09:34:14.107046755 +0000 UTC m=+2.151265608 container attach d14c9cf88b2e32b2d2b7b442c5c49cac002b6e191de5e96f1cf0c01793c261ac (image=quay.io/ceph/ceph:v19, name=priceless_sammet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:14 compute-0 podman[96641]: 2025-11-25 09:34:14.126659927 +0000 UTC m=+6.165613930 container init a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 podman[96641]: 2025-11-25 09:34:14.130602409 +0000 UTC m=+6.169556392 container start a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 vigilant_antonelli[97022]: 472 0
Nov 25 09:34:14 compute-0 systemd[1]: libpod-a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8.scope: Deactivated successfully.
Nov 25 09:34:14 compute-0 podman[96641]: 2025-11-25 09:34:14.132176557 +0000 UTC m=+6.171130560 container attach a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 conmon[97022]: conmon a569fd77a0afb8d8ec31 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8.scope/container/memory.events
Nov 25 09:34:14 compute-0 podman[96641]: 2025-11-25 09:34:14.132950084 +0000 UTC m=+6.171904067 container died a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-eee118ad8c289d03343698d2f5ce9b30306c9ab0b93f085792afe71be7c5e5cf-merged.mount: Deactivated successfully.
Nov 25 09:34:14 compute-0 podman[96641]: 2025-11-25 09:34:14.151731268 +0000 UTC m=+6.190685252 container remove a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 systemd[1]: libpod-conmon-a569fd77a0afb8d8ec314ae276e279fd171bb25659046eb6870f0bb5bde099e8.scope: Deactivated successfully.
Nov 25 09:34:14 compute-0 podman[97035]: 2025-11-25 09:34:14.196378241 +0000 UTC m=+0.029771332 container create 75e5d9abc5fb1709538734a751e0b4c94b1e8239ffeaa6d270cf387d0576fa36 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_elgamal, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 systemd[1]: Started libpod-conmon-75e5d9abc5fb1709538734a751e0b4c94b1e8239ffeaa6d270cf387d0576fa36.scope.
Nov 25 09:34:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:14 compute-0 podman[97035]: 2025-11-25 09:34:14.239661693 +0000 UTC m=+0.073054804 container init 75e5d9abc5fb1709538734a751e0b4c94b1e8239ffeaa6d270cf387d0576fa36 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_elgamal, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 podman[97035]: 2025-11-25 09:34:14.244348729 +0000 UTC m=+0.077741820 container start 75e5d9abc5fb1709538734a751e0b4c94b1e8239ffeaa6d270cf387d0576fa36 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_elgamal, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 vigilant_elgamal[97069]: 472 0
Nov 25 09:34:14 compute-0 podman[97035]: 2025-11-25 09:34:14.245664789 +0000 UTC m=+0.079057880 container attach 75e5d9abc5fb1709538734a751e0b4c94b1e8239ffeaa6d270cf387d0576fa36 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_elgamal, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 systemd[1]: libpod-75e5d9abc5fb1709538734a751e0b4c94b1e8239ffeaa6d270cf387d0576fa36.scope: Deactivated successfully.
Nov 25 09:34:14 compute-0 podman[97035]: 2025-11-25 09:34:14.246655858 +0000 UTC m=+0.080048949 container died 75e5d9abc5fb1709538734a751e0b4c94b1e8239ffeaa6d270cf387d0576fa36 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_elgamal, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7729bf29b02124ad5386a8e8e2f157d76be71202c2f3c92e3f498a11035d7474-merged.mount: Deactivated successfully.
Nov 25 09:34:14 compute-0 podman[97035]: 2025-11-25 09:34:14.26424285 +0000 UTC m=+0.097635941 container remove 75e5d9abc5fb1709538734a751e0b4c94b1e8239ffeaa6d270cf387d0576fa36 (image=quay.io/ceph/grafana:10.4.0, name=vigilant_elgamal, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 podman[97035]: 2025-11-25 09:34:14.182780749 +0000 UTC m=+0.016173860 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 25 09:34:14 compute-0 systemd[1]: libpod-conmon-75e5d9abc5fb1709538734a751e0b4c94b1e8239ffeaa6d270cf387d0576fa36.scope: Deactivated successfully.
Nov 25 09:34:14 compute-0 systemd[1]: Reloading.
Nov 25 09:34:14 compute-0 systemd-sysv-generator[97111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:14 compute-0 systemd-rc-local-generator[97108]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Nov 25 09:34:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/580932889' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 25 09:34:14 compute-0 priceless_sammet[97017]: 
Nov 25 09:34:14 compute-0 priceless_sammet[97017]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Nov 25 09:34:14 compute-0 podman[96965]: 2025-11-25 09:34:14.457342365 +0000 UTC m=+2.501561219 container died d14c9cf88b2e32b2d2b7b442c5c49cac002b6e191de5e96f1cf0c01793c261ac (image=quay.io/ceph/ceph:v19, name=priceless_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:14 compute-0 systemd[1]: libpod-d14c9cf88b2e32b2d2b7b442c5c49cac002b6e191de5e96f1cf0c01793c261ac.scope: Deactivated successfully.
Nov 25 09:34:14 compute-0 podman[96965]: 2025-11-25 09:34:14.517493384 +0000 UTC m=+2.561712237 container remove d14c9cf88b2e32b2d2b7b442c5c49cac002b6e191de5e96f1cf0c01793c261ac (image=quay.io/ceph/ceph:v19, name=priceless_sammet, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 09:34:14 compute-0 systemd[1]: libpod-conmon-d14c9cf88b2e32b2d2b7b442c5c49cac002b6e191de5e96f1cf0c01793c261ac.scope: Deactivated successfully.
Nov 25 09:34:14 compute-0 sudo[96962]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:14 compute-0 systemd[1]: Reloading.
Nov 25 09:34:14 compute-0 systemd-rc-local-generator[97158]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:14 compute-0 systemd-sysv-generator[97161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2707a74eb727a5b3cd14f4c64ecf6786664d9b1bce0e4635fbf15d388086ea23-merged.mount: Deactivated successfully.
Nov 25 09:34:14 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:34:14 compute-0 podman[97212]: 2025-11-25 09:34:14.886207424 +0000 UTC m=+0.032253151 container create e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde3a6c663f06f99580f42e40c265979e194f152a4b24a2a4c7be1556b51ff4e/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde3a6c663f06f99580f42e40c265979e194f152a4b24a2a4c7be1556b51ff4e/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde3a6c663f06f99580f42e40c265979e194f152a4b24a2a4c7be1556b51ff4e/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde3a6c663f06f99580f42e40c265979e194f152a4b24a2a4c7be1556b51ff4e/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde3a6c663f06f99580f42e40c265979e194f152a4b24a2a4c7be1556b51ff4e/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:14 compute-0 podman[97212]: 2025-11-25 09:34:14.930216444 +0000 UTC m=+0.076262180 container init e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 podman[97212]: 2025-11-25 09:34:14.934486904 +0000 UTC m=+0.080532620 container start e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:14 compute-0 bash[97212]: e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565
Nov 25 09:34:14 compute-0 podman[97212]: 2025-11-25 09:34:14.872085874 +0000 UTC m=+0.018131590 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 25 09:34:14 compute-0 systemd[1]: Started Ceph grafana.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:34:14 compute-0 ceph-mon[74207]: pgmap v18: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 09:34:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/580932889' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 25 09:34:14 compute-0 sudo[96534]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 25 09:34:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:14 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev f3b65cae-a13f-45d7-b0ba-7b626974ada1 (Updating grafana deployment (+1 -> 1))
Nov 25 09:34:14 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event f3b65cae-a13f-45d7-b0ba-7b626974ada1 (Updating grafana deployment (+1 -> 1)) in 8 seconds
Nov 25 09:34:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 25 09:34:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:14 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev c67d856d-13e5-4608-8320-1334780a23e9 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 25 09:34:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Nov 25 09:34:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.jgcdmc on compute-0
Nov 25 09:34:15 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.jgcdmc on compute-0
Nov 25 09:34:15 compute-0 sudo[97240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:15 compute-0 sudo[97240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:15 compute-0 sudo[97240]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081401819Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-11-25T09:34:15Z
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.08161993Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081627925Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081631862Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081635509Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081638826Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081641992Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081645057Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081648884Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.08165207Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081655076Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081657931Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081660967Z level=info msg=Target target=[all]
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081666587Z level=info msg="Path Home" path=/usr/share/grafana
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081669573Z level=info msg="Path Data" path=/var/lib/grafana
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081672409Z level=info msg="Path Logs" path=/var/log/grafana
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081675394Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.08167849Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=settings t=2025-11-25T09:34:15.081681416Z level=info msg="App mode production"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=sqlstore t=2025-11-25T09:34:15.081951425Z level=info msg="Connecting to DB" dbtype=sqlite3
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=sqlstore t=2025-11-25T09:34:15.081965922Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.08251107Z level=info msg="Starting DB migrations"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.083564906Z level=info msg="Executing migration" id="create migration_log table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.084472577Z level=info msg="Migration successfully executed" id="create migration_log table" duration=907.209µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.085526584Z level=info msg="Executing migration" id="create user table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.086180847Z level=info msg="Migration successfully executed" id="create user table" duration=654.203µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.087020941Z level=info msg="Executing migration" id="add unique index user.login"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.087596907Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=575.535µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.088288259Z level=info msg="Executing migration" id="add unique index user.email"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.088911626Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=602.766µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.089608659Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.090199964Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=591.084µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.091029267Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.091599723Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=570.515µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.092346771Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Nov 25 09:34:15 compute-0 sudo[97265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.094642188Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.293754ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.095434371Z level=info msg="Executing migration" id="create user table v2"
Nov 25 09:34:15 compute-0 sudo[97265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.096301855Z level=info msg="Migration successfully executed" id="create user table v2" duration=867.234µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.097053604Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.097692047Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=638.694µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.098424698Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.099071127Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=645.997µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.099812405Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.100226385Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=413.58µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.100957663Z level=info msg="Executing migration" id="Drop old table user_v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.101472894Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=515.101µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.102154408Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.103082077Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=927.408µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.103950234Z level=info msg="Executing migration" id="Update user table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.104063017Z level=info msg="Migration successfully executed" id="Update user table charset" duration=114.075µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.104959095Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.105852339Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=892.453µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.106773135Z level=info msg="Executing migration" id="Add missing user data"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.107061619Z level=info msg="Migration successfully executed" id="Add missing user data" duration=287.312µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.107853472Z level=info msg="Executing migration" id="Add is_disabled column to user"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.108758698Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=905.478µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.109509312Z level=info msg="Executing migration" id="Add index user.login/user.email"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.110190397Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=680.122µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.110853767Z level=info msg="Executing migration" id="Add is_service_account column to user"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.111795914Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=941.183µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.112534997Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.118564172Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.029106ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.119411459Z level=info msg="Executing migration" id="Add uid column to user"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.120361431Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=949.65µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.121112385Z level=info msg="Executing migration" id="Update uid column values for users"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.121360464Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=248.298µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.122124323Z level=info msg="Executing migration" id="Add unique index user_uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.12277512Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=651.448µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.123550411Z level=info msg="Executing migration" id="create temp user table v1-7"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.124230463Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=680.032µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.125163573Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.125798299Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=634.446µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.126719295Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.12735845Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=638.854µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.128217298Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.128850442Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=632.111µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.129747754Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.13039269Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=644.675µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.131233004Z level=info msg="Executing migration" id="Update temp_user table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.131343151Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=110.568µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.132146896Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.132775312Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=628.305µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.133510517Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.134211328Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=700.501µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.13490742Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.135539392Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=631.621µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.136272434Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.136913522Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=640.838µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.137658736Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.140224704Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.566559ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.140993042Z level=info msg="Executing migration" id="create temp_user v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.141685858Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=692.674µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.142436924Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.143196335Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=757.808µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.145042896Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.1456974Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=654.405µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.146457453Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.147139919Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=682.227µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.147860287Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.148518768Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=658.522µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.149361797Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.149770768Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=408.721µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.150478433Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.151009284Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=530.892µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.15170736Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.152111432Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=403.771µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.152851205Z level=info msg="Executing migration" id="create star table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.153466866Z level=info msg="Migration successfully executed" id="create star table" duration=615.52µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.15424904Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.15492771Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=678.329µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.155745231Z level=info msg="Executing migration" id="create org table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.156388874Z level=info msg="Migration successfully executed" id="create org table v1" duration=643.713µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.157232434Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.157885485Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=652.9µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.158725299Z level=info msg="Executing migration" id="create org_user table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.159332413Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=606.884µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.160128845Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.160819978Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=690.652µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.161595469Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.162330505Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=734.696µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.163117257Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.163783213Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=665.826µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.164597998Z level=info msg="Executing migration" id="Update org table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.164708056Z level=info msg="Migration successfully executed" id="Update org table charset" duration=110.288µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.165487205Z level=info msg="Executing migration" id="Update org_user table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.165604295Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=117.732µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.16637572Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.166602708Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=226.527µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.167349416Z level=info msg="Executing migration" id="create dashboard table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.168077107Z level=info msg="Migration successfully executed" id="create dashboard table" duration=727.591µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.168868028Z level=info msg="Executing migration" id="add index dashboard.account_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.16960124Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=733.242µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.170544669Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.171270086Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=725.247µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.172197765Z level=info msg="Executing migration" id="create dashboard_tag table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.172809267Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=610.03µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.174047112Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.174779902Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=733.813µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.175681612Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.176285842Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=605.441µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.176883408Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.180931649Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.04813ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.181754059Z level=info msg="Executing migration" id="create dashboard v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.182374178Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=620.129µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.183038079Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.183675792Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=637.713µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.184573725Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.185248005Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=674µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.186112946Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.18642301Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=309.963µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.18709673Z level=info msg="Executing migration" id="drop table dashboard_v1"
Nov 25 09:34:15 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.187922016Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=825.095µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.188561421Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.188611156Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=50.135µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.189328207Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.19065093Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.322553ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.196838033Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.198237542Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.400881ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.199242305Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.201768417Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.525992ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.202669766Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.203462611Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=793.005µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.204355044Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.205966901Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.611567ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.2067453Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.207542803Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=797.353µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.208328504Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.209097012Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=768.398µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.210116715Z level=info msg="Executing migration" id="Update dashboard table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.210244185Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=128.091µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.211104496Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.211231757Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=127.872µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.212085866Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.213512457Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.42658ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.214256438Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.215649885Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.394098ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.216441017Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.217840635Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.399818ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.218647105Z level=info msg="Executing migration" id="Add column uid in dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.220084023Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.436899ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.220961258Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.221248329Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=286.93µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.222139749Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.222906714Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=766.816µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.223939282Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.224678565Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=739.324µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.225530701Z level=info msg="Executing migration" id="Update dashboard title length"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.225665295Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=133.883µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.226532991Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.227293173Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=759.941µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.228069337Z level=info msg="Executing migration" id="create dashboard_provisioning"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.228697661Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=628.374µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.229517736Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.233104879Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.587953ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.233951014Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.234581392Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=630.297µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.235463736Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.236114552Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=650.194µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.236854237Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.237512086Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=657.699µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.238299611Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.238601961Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=302.169µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.239194599Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.239868509Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=673.71µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.240555775Z level=info msg="Executing migration" id="Add check_sum column"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.242015927Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.460002ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.242765019Z level=info msg="Executing migration" id="Add index for dashboard_title"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.243393884Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=628.975µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.24424059Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.244407656Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=167.176µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.245376141Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.245539309Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=163.498µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.246183513Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.246782883Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=599.339µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.247543586Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.249064503Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.520626ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.249790363Z level=info msg="Executing migration" id="create data_source table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.250512694Z level=info msg="Migration successfully executed" id="create data_source table" duration=722.192µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.251324524Z level=info msg="Executing migration" id="add index data_source.account_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.251978227Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=652.14µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.252729663Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.253362966Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=633.053µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.254088124Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.254698003Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=609.489µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.255373687Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.256004176Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=631.02µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.256742227Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.260553631Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=3.811215ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.261770505Z level=info msg="Executing migration" id="create data_source table v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.262516232Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=745.646µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.263261906Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.263953801Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=691.914µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.264580082Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.265266385Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=685.994µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.266049732Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.266517234Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=467.342µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.267237011Z level=info msg="Executing migration" id="Add column with_credentials"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.268786451Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.549309ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.269577452Z level=info msg="Executing migration" id="Add secure json data column"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.27119454Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.617018ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.271866988Z level=info msg="Executing migration" id="Update data_source table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.271936008Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=69.462µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.272660654Z level=info msg="Executing migration" id="Update initial version to 1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.272842908Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=182.404µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.273598391Z level=info msg="Executing migration" id="Add read_only data column"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.275250626Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.651975ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.276079899Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.27626577Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=186.261µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.277001707Z level=info msg="Executing migration" id="Update json_data with nulls"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.277176076Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=175.341µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.277849906Z level=info msg="Executing migration" id="Add uid column"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.279527759Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.677603ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.280225404Z level=info msg="Executing migration" id="Update uid value"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.280701742Z level=info msg="Migration successfully executed" id="Update uid value" duration=476.398µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.281442298Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.282134183Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=691.694µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.282813833Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.283450243Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=635.86µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.284226987Z level=info msg="Executing migration" id="create api_key table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.284868517Z level=info msg="Migration successfully executed" id="create api_key table" duration=641.34µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.285690717Z level=info msg="Executing migration" id="add index api_key.account_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.286368013Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=676.796µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.287158594Z level=info msg="Executing migration" id="add index api_key.key"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.287779925Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=621.03µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.288568151Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.289260416Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=692.015µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.290082696Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.290710669Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=627.633µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.291385141Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.292022722Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=637.742µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.29278491Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.293427581Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=642.43µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.294139342Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.298347505Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.207882ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.299138416Z level=info msg="Executing migration" id="create api_key table v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.299744659Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=606.123µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.300504412Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.30115621Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=651.608µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.301847904Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.3024663Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=618.136µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.303149086Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.303778494Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=629.156µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.304568513Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.304920086Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=351.261µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.305561744Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.306082036Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=520.151µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.306749784Z level=info msg="Executing migration" id="Update api_key table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.306801572Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=52.25µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.307672053Z level=info msg="Executing migration" id="Add expires to api_key table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.309414137Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.742905ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.310124666Z level=info msg="Executing migration" id="Add service account foreign key"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.311760369Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.635693ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.31241849Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.312576598Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=158.167µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.313584879Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.315356639Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.771798ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.316089992Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.317813891Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.72378ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.318515234Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.319161091Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=646.348µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.319831624Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.320323292Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=491.547µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.321030585Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.321714354Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=683.599µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.32233275Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.323005428Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=672.548µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.323751664Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.324438329Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=686.483µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.325197479Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.325935651Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=737.682µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.326715452Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.326797847Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=82.685µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.327684117Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.327750172Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=66.425µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.328567803Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.330448448Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.879554ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.331231915Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.333154429Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.919799ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.334073832Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.334154675Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=80.733µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.334967356Z level=info msg="Executing migration" id="create quota table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.335546799Z level=info msg="Migration successfully executed" id="create quota table v1" duration=578.281µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.336501578Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.337217287Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=716.511µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.338171827Z level=info msg="Executing migration" id="Update quota table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.338236589Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=65.222µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.339002643Z level=info msg="Executing migration" id="create plugin_setting table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.33963226Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=629.287µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.340429423Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.341156463Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=726.72µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.342142743Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.344035902Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.891956ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.345012732Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.345081202Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=68.931µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.345942345Z level=info msg="Executing migration" id="create session table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.34665575Z level=info msg="Migration successfully executed" id="create session table" duration=713.185µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.347680021Z level=info msg="Executing migration" id="Drop old table playlist table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.34778591Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=106.27µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.348549911Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.348669206Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=119.415µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.349476778Z level=info msg="Executing migration" id="create playlist table v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.350102317Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=625.319µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.351032259Z level=info msg="Executing migration" id="create playlist item table v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.351633393Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=601.063µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.353036537Z level=info msg="Executing migration" id="Update playlist table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.353113533Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=77.637µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.353910465Z level=info msg="Executing migration" id="Update playlist_item table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.353985738Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=75.793µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.354709762Z level=info msg="Executing migration" id="Add playlist column created_at"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.357011852Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.30213ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.357955311Z level=info msg="Executing migration" id="Add playlist column updated_at"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.360173382Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.218021ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.360927452Z level=info msg="Executing migration" id="drop preferences table v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.361049984Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=122.862µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.361794447Z level=info msg="Executing migration" id="drop preferences table v3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.361914954Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=119.815µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.362668124Z level=info msg="Executing migration" id="create preferences table v3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.363324391Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=656.047µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.364134939Z level=info msg="Executing migration" id="Update preferences table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.364186516Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=52.018µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.365094038Z level=info msg="Executing migration" id="Add column team_id in preferences"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.367221878Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.127681ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.368063404Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.368226852Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=163.799µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.369009448Z level=info msg="Executing migration" id="Add column week_start in preferences"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.371091112Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.081524ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.371812743Z level=info msg="Executing migration" id="Add column preferences.json_data"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.373858599Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.046478ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.374562025Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.374664467Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=102.813µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.375646169Z level=info msg="Executing migration" id="Add preferences index org_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.376438121Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=791.862µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.377341966Z level=info msg="Executing migration" id="Add preferences index user_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.378193531Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=851.365µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.379040437Z level=info msg="Executing migration" id="create alert table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.379950993Z level=info msg="Migration successfully executed" id="create alert table v1" duration=909.503µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.38072893Z level=info msg="Executing migration" id="add index alert org_id & id "
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.381504843Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=774.672µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.382320952Z level=info msg="Executing migration" id="add index alert state"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.38304726Z level=info msg="Migration successfully executed" id="add index alert state" duration=725.838µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.383861505Z level=info msg="Executing migration" id="add index alert dashboard_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.384785647Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=923.3µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.385685192Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.386249887Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=562.631µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.387021322Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.387696204Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=674.511µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.388386665Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.389049675Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=661.588µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.389685123Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.396061122Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=6.375648ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.396753327Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.397304706Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=551.288µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.397963708Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.398572186Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=608.186µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.399383435Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.399628418Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=244.851µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.400183433Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.400614745Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=431.162µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.401281032Z level=info msg="Executing migration" id="create alert_notification table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.401845286Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=564.004µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.402493948Z level=info msg="Executing migration" id="Add column is_default"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.404848097Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.353989ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.405668533Z level=info msg="Executing migration" id="Add column frequency"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.408250601Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.581927ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.409234735Z level=info msg="Executing migration" id="Add column send_reminder"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.411795433Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.560607ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.412528424Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.414752286Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.231145ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.415455733Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.416095167Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=639.394µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.416813051Z level=info msg="Executing migration" id="Update alert table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.416832047Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=19.456µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.417521957Z level=info msg="Executing migration" id="Update alert_notification table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.417537125Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=15.769µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.418066083Z level=info msg="Executing migration" id="create notification_journal table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.418610169Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=543.916µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.419324796Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.41996341Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=638.224µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.420620137Z level=info msg="Executing migration" id="drop alert_notification_journal"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.421280032Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=660.716µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.421925899Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.422500863Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=574.664µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.423144095Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.423765005Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=620.539µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.424387069Z level=info msg="Executing migration" id="Add for to alert table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.426823751Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.436392ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.427552124Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.42995824Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.405815ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.430571115Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.430712462Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=141.347µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.431357909Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.431994309Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=635.959µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.432716551Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.433347369Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=630.809µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.433999569Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.436363574Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.363855ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.437053185Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.437094723Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=41.869µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.437792307Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.438410313Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=615.942µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.438993533Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.439669627Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=675.694µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.440394643Z level=info msg="Executing migration" id="Drop old annotation table v4"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.440462341Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=67.888µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.441166929Z level=info msg="Executing migration" id="create annotation table v5"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.441799852Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=632.833µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.442643233Z level=info msg="Executing migration" id="add index annotation 0 v3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.443259113Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=615.541µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.444105349Z level=info msg="Executing migration" id="add index annotation 1 v3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.444709759Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=604.149µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.445677452Z level=info msg="Executing migration" id="add index annotation 2 v3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.446292962Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=615.28µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.44728395Z level=info msg="Executing migration" id="add index annotation 3 v3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.447977337Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=692.966µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.448847919Z level=info msg="Executing migration" id="add index annotation 4 v3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.449548349Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=700.049µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.452213242Z level=info msg="Executing migration" id="Update annotation table charset"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.452232419Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=19.688µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.452961052Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.456000021Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.037606ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.456999856Z level=info msg="Executing migration" id="Drop category_id index"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.457676912Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=677.317µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.45831769Z level=info msg="Executing migration" id="Add column tags to annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.460756477Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.438628ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.461445897Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.46197814Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=532.222µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.462583742Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.463231873Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=647.901µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.464298895Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.464944753Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=645.948µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.465647688Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.472601566Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=6.953517ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.473237123Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.473761453Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=524.219µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.474450351Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.475095197Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=645.607µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.476006404Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.476234243Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=227.669µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.476848762Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.477281969Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=433.137µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.477905364Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.478034527Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=144.592µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.478700703Z level=info msg="Executing migration" id="Add created time to annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.481216565Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=2.51431ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.481944578Z level=info msg="Executing migration" id="Add updated time to annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.484434081Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.489233ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.485078657Z level=info msg="Executing migration" id="Add index for created in annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.48570111Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=622.053µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.486349862Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.486980322Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=630.95µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.487719624Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.487883804Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=164.21µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.488639178Z level=info msg="Executing migration" id="Add epoch_end column"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.491204043Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=2.564524ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.491850752Z level=info msg="Executing migration" id="Add index for epoch_end"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.492488414Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=637.612µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.493183133Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.493304543Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=121.4µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.49393957Z level=info msg="Executing migration" id="Move region to single row"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.494209619Z level=info msg="Migration successfully executed" id="Move region to single row" duration=271.252µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.494851298Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.495478069Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=626.55µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.496168401Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.496782408Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=612.915µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.497396827Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.49805145Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=654.603µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.498660608Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.499293983Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=632.913µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.499841696Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.50045425Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=612.404µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.501113994Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.501728402Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=612.685µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.502365102Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.502405729Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=40.968µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.50330849Z level=info msg="Executing migration" id="create test_data table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.503929481Z level=info msg="Migration successfully executed" id="create test_data table" duration=620.741µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.50466124Z level=info msg="Executing migration" id="create dashboard_version table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.505251784Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=590.463µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.505942936Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.507088856Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.145469ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.507803674Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.508660379Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=856.264µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.509481156Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.509622844Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=141.236µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.510324466Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.510694273Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=369.476µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.511306316Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.511347124Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=41.098µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.512014161Z level=info msg="Executing migration" id="create team table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.512543489Z level=info msg="Migration successfully executed" id="create team table" duration=529.259µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.51332376Z level=info msg="Executing migration" id="add index team.org_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.514092559Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=768.488µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.514848033Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.515490234Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=641.029µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.516227183Z level=info msg="Executing migration" id="Add column uid in team"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.519043903Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=2.81651ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.519720788Z level=info msg="Executing migration" id="Update uid column values in team"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.519846316Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=125.618µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.520493195Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.521182564Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=689.09µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.521940624Z level=info msg="Executing migration" id="create team member table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.522499998Z level=info msg="Migration successfully executed" id="create team member table" duration=559.084µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.523235514Z level=info msg="Executing migration" id="add index team_member.org_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.523848069Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=612.546µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.52477658Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.525413059Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=636.058µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.526137324Z level=info msg="Executing migration" id="add index team_member.team_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.526770779Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=633.015µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.527448667Z level=info msg="Executing migration" id="Add column email to team table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.530441488Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=2.99121ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.531149923Z level=info msg="Executing migration" id="Add column external to team_member table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.534166771Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.016398ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.534833688Z level=info msg="Executing migration" id="Add column permission to team_member table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.537646399Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=2.812571ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.538302857Z level=info msg="Executing migration" id="create dashboard acl table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.539026391Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=723.474µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.539860573Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.540524185Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=663.16µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.541255292Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.542014213Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=758.841µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.544376407Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.545053011Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=676.304µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.545725209Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.546367419Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=641.85µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.547023055Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.547660747Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=637.352µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.548419838Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.5490886Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=668.41µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.55001723Z level=info msg="Executing migration" id="add index dashboard_permission"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.550698845Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=681.465µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.551392863Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.551796804Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=404.583µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.552515059Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.552683666Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=168.458µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.553368006Z level=info msg="Executing migration" id="create tag table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.553975721Z level=info msg="Migration successfully executed" id="create tag table" duration=607.585µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.554711789Z level=info msg="Executing migration" id="add index tag.key_value"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.555362516Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=650.305µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.556060422Z level=info msg="Executing migration" id="create login attempt table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.556606872Z level=info msg="Migration successfully executed" id="create login attempt table" duration=544.918µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.557271073Z level=info msg="Executing migration" id="add index login_attempt.username"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.557928372Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=657.149µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.55862749Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.559274901Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=647.16µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.561060527Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.56963457Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=8.573663ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.570320011Z level=info msg="Executing migration" id="create login_attempt v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.570864398Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=544.196µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.571521728Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.572169789Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=647.861µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.572868455Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.573122825Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=254.289µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.573726953Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.57420773Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=480.496µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.574834061Z level=info msg="Executing migration" id="create user auth table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.575365513Z level=info msg="Migration successfully executed" id="create user auth table" duration=531.281µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.576057257Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.576710638Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=653.04µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.577432028Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.577475631Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=44.024µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.578249089Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.581444072Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.194532ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.582156194Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.585246428Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.088602ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.585951068Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.589010404Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.058816ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.589670889Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.592791192Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.120012ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.593452117Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.594121559Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=669.252µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.594855853Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.598033343Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.176368ms
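
The "Add OAuth ... to user_auth" pairs above are plain additive column migrations, each landing around 3ms versus roughly 0.7ms for the index builds. A sketch of an idempotent version follows, assuming database/sql and SQLite; the pragma probe is one common approach (not Grafana's migrator code), and the column names are guessed from the migration ids.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

// addColumnIfMissing consults the live schema first, so re-running
// the migration is harmless.
func addColumnIfMissing(db *sql.DB, table, col, decl string) error {
	// table/col come from our own migration definitions, never from
	// user input, so string interpolation is acceptable here.
	rows, err := db.Query(fmt.Sprintf(
		`SELECT name FROM pragma_table_info('%s')`, table))
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return err
		}
		if name == col {
			return nil // column already present: nothing to do
		}
	}
	if err := rows.Err(); err != nil {
		return err
	}
	_, err = db.Exec(fmt.Sprintf(
		`ALTER TABLE %s ADD COLUMN %s %s`, table, col, decl))
	return err
}

func main() {
	db, err := sql.Open("sqlite3", "grafana.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	for _, c := range [][2]string{
		{"o_auth_access_token", "TEXT"},
		{"o_auth_refresh_token", "TEXT"},
		{"o_auth_token_type", "TEXT"},
		{"o_auth_expiry", "DATETIME"},
	} {
		if err := addColumnIfMissing(db, "user_auth", c[0], c[1]); err != nil {
			log.Fatal(err)
		}
	}
}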
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.598770844Z level=info msg="Executing migration" id="create server_lock table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.599359613Z level=info msg="Migration successfully executed" id="create server_lock table" duration=588.719µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.600129615Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.600771865Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=642.01µs
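
A server_lock table with a unique index on operation_uid is the usual shape of a database-backed mutex: the first instance to insert a row for an operation owns it, and the unique constraint turns every race into a clean failure. A sketch under that assumption (table layout guessed; in-memory DB for demonstration):

package main

import (
	"database/sql"
	"errors"
	"fmt"
	"log"
	"time"

	"github.com/mattn/go-sqlite3" // imported for its error types; also registers the driver
)

// tryLock claims the named operation; false means another process
// already holds it. The UNIQUE index does the mutual exclusion.
func tryLock(db *sql.DB, op string) (bool, error) {
	_, err := db.Exec(
		`INSERT INTO server_lock (operation_uid, last_execution, version)
		 VALUES (?, ?, 0)`, op, time.Now().Unix())
	var serr sqlite3.Error
	if errors.As(err, &serr) && serr.ExtendedCode == sqlite3.ErrConstraintUnique {
		return false, nil // lost the race: somebody else inserted first
	}
	return err == nil, err
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(1) // :memory: DBs are per-connection; pin to one
	db.Exec(`CREATE TABLE server_lock (
	    id INTEGER PRIMARY KEY,
	    operation_uid TEXT NOT NULL,
	    last_execution INTEGER NOT NULL,
	    version INTEGER NOT NULL)`)
	db.Exec(`CREATE UNIQUE INDEX UQE_server_lock_operation_uid
	         ON server_lock (operation_uid)`)

	first, _ := tryLock(db, "delete_expired_tokens")
	second, _ := tryLock(db, "delete_expired_tokens")
	fmt.Println(first, second) // true false
}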
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.601508584Z level=info msg="Executing migration" id="create user auth token table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.60215401Z level=info msg="Migration successfully executed" id="create user auth token table" duration=645.205µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.602828192Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.603482937Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=654.615µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.604204446Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.604918843Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=714.266µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.605638629Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.606368505Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=729.645µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.607109482Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.610451402Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.340817ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.611116927Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.611786148Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=668.971µs
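
Unique indexes on both auth_token and prev_auth_token support token rotation: during a rotation window a client may still present the previous token, so both values must be individually addressable, while the revoked_at column (indexed just above) keeps revocation checks and cleanup scans cheap. A lookup sketch under those assumptions, with a deliberately simplified schema:

package main

import (
	"database/sql"
	"errors"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

// lookupToken matches a presented (already-hashed) token against the
// current or previous value, ignoring revoked sessions.
func lookupToken(db *sql.DB, hashed string) (int64, bool, error) {
	var userID int64
	err := db.QueryRow(
		`SELECT user_id FROM user_auth_token
		  WHERE (auth_token = ? OR prev_auth_token = ?)
		    AND (revoked_at IS NULL OR revoked_at = 0)`,
		hashed, hashed).Scan(&userID)
	if errors.Is(err, sql.ErrNoRows) {
		return 0, false, nil
	}
	if err != nil {
		return 0, false, err
	}
	return userID, true, nil
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(1) // :memory: DBs are per-connection
	db.Exec(`CREATE TABLE user_auth_token (
	    id INTEGER PRIMARY KEY,
	    user_id INTEGER NOT NULL,
	    auth_token TEXT,
	    prev_auth_token TEXT,
	    revoked_at INTEGER)`)
	db.Exec(`INSERT INTO user_auth_token
	         (user_id, auth_token, prev_auth_token, revoked_at)
	         VALUES (42, 'new-hash', 'old-hash', NULL)`)

	id, ok, _ := lookupToken(db, "old-hash") // rotation window: old token still valid
	fmt.Println(id, ok)                      // 42 true
}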
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.612469797Z level=info msg="Executing migration" id="create cache_data table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.613119783Z level=info msg="Migration successfully executed" id="create cache_data table" duration=649.835µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.613840581Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.614516145Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=675.723µs
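
cache_data with a unique cache_key is a SQL-backed key/value cache: the unique index makes writes an upsert and reads a point lookup. A sketch assuming SQLite 3.24+ for the ON CONFLICT upsert form (column layout guessed):

package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

// cacheSet inserts or overwrites one entry; the UNIQUE index on
// cache_key turns the INSERT into an upsert.
func cacheSet(db *sql.DB, key string, val []byte, ttl time.Duration) error {
	_, err := db.Exec(`
	    INSERT INTO cache_data (cache_key, data, created_at, expires)
	    VALUES (?, ?, ?, ?)
	    ON CONFLICT(cache_key) DO UPDATE SET
	        data = excluded.data,
	        created_at = excluded.created_at,
	        expires = excluded.expires`,
		key, val, time.Now().Unix(), int64(ttl.Seconds()))
	return err
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(1) // :memory: DBs are per-connection
	db.Exec(`CREATE TABLE cache_data (
	    cache_key TEXT, data BLOB, created_at INTEGER, expires INTEGER)`)
	db.Exec(`CREATE UNIQUE INDEX UQE_cache_data_cache_key
	         ON cache_data (cache_key)`)

	cacheSet(db, "session:1", []byte("v1"), time.Minute)
	cacheSet(db, "session:1", []byte("v2"), time.Minute) // overwrite, no duplicate
	var n int
	db.QueryRow(`SELECT COUNT(*) FROM cache_data`).Scan(&n)
	fmt.Println(n) // 1
}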
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.615300894Z level=info msg="Executing migration" id="create short_url table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.616006594Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=705.5µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.61676294Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.617462008Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=698.738µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.618184138Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.61825923Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=77.717µs
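
The durations are diagnostic here: "alter table short_url alter column created_by type to bigint" finishes in 77.717µs, and every "... mediumtext in mysql" id in this section lands around 45µs, while real DDL costs hundreds of µs to milliseconds. Those migrations are gated on the MySQL dialect and reduce to recording a no-op on this SQLite-backed Grafana. A sketch of that gating follows; the helper is hypothetical, not Grafana's migrator API.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

// runIfDialect executes stmt only when the active dialect matches,
// so MySQL-only type tweaks are near-instant no-ops everywhere else,
// while still being logged and recorded as executed.
func runIfDialect(db *sql.DB, active, want, stmt string) error {
	if active != want {
		return nil
	}
	_, err := db.Exec(stmt)
	return err
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// On this instance active == "sqlite3", so nothing runs.
	err = runIfDialect(db, "sqlite3", "mysql",
		`ALTER TABLE alert_definition MODIFY data MEDIUMTEXT`)
	fmt.Println("recorded as executed, err =", err)
}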
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.619275536Z level=info msg="Executing migration" id="delete alert_definition table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.619352051Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=77.295µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.62039722Z level=info msg="Executing migration" id="recreate alert_definition table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.621312867Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=918.412µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.62221068Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.622955633Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=744.553µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.623714052Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.62463527Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=920.847µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.625517543Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.62556374Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=46.667µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.626207593Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.626934314Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=726.821µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.627519216Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.62821137Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=692.065µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.628944843Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.629647638Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=702.214µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.63025861Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.630983025Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=724.095µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.631567908Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.635244158Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=3.678124ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.635905304Z level=info msg="Executing migration" id="drop alert_definition table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.636688681Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=783.136µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.637290065Z level=info msg="Executing migration" id="delete alert_definition_version table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.637358734Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=69.091µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.638025571Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.638685405Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=659.744µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.639338034Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.640081627Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=743.602µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.640685766Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.641399261Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=713.104µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.642033947Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.642078451Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=45.116µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.642739186Z level=info msg="Executing migration" id="drop alert_definition_version table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.643506403Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=768.259µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.64421011Z level=info msg="Executing migration" id="create alert_instance table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.644909949Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=700.921µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.64550485Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.646246117Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=740.996µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.64684734Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.647555956Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=708.255µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.648353709Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.652050168Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=3.696368ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.652658315Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.653334369Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=675.954µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.654036061Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.654707047Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=669.583µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.655311626Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.674701467Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=19.38941ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.675530441Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.692911103Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=17.380663ms
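
The two renames above stand out at 19.389ms and 17.381ms against sub-millisecond neighbors: before SQLite 3.25 a column rename could only be done by rewriting the table (the recreation pattern shown after the login_attempt block), and that row-by-row rewrite is what these durations are measuring. On 3.25+ the same change is a one-statement, metadata-only operation, sketched below with an assumed minimal schema:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(1) // :memory: DBs are per-connection
	db.Exec(`CREATE TABLE alert_instance (def_org_id INTEGER, def_uid TEXT)`)

	// Metadata-only on SQLite >= 3.25; older builds must fall back to
	// the rename-aside/copy/drop table rewrite instead.
	for _, stmt := range []string{
		`ALTER TABLE alert_instance RENAME COLUMN def_org_id TO rule_org_id`,
		`ALTER TABLE alert_instance RENAME COLUMN def_uid TO rule_uid`,
	} {
		if _, err := db.Exec(stmt); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("renamed without a table rewrite")
}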
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.693627985Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.694329687Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=701.452µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.695029326Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.695701334Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=671.677µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.696639763Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.700110896Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=3.471754ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.700832536Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.704292749Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.45868ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.705024969Z level=info msg="Executing migration" id="create alert_rule table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.70571496Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=689.93µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.706470304Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.707210539Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=739.975µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.707933622Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.708638711Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=705.028µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.709395789Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.710192621Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=797.713µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.710922998Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.710966931Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=44.394µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.711626614Z level=info msg="Executing migration" id="add column for to alert_rule"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.715365742Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=3.731303ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.71604358Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.71964568Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=3.60183ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.720332215Z level=info msg="Executing migration" id="add column labels to alert_rule"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.723941578Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=3.609173ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.724544345Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.725250396Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=705.961µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.726088135Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.727015593Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=927.069µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.727622077Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.73117827Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=3.555802ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.731843805Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.735647294Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=3.80321ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.736273335Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.73701863Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=750.706µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.737735341Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.741330157Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=3.594757ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.7420209Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.745601619Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=3.580379ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.746215877Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.746261312Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=45.897µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.746982562Z level=info msg="Executing migration" id="create alert_rule_version table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.747788541Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=805.769µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.748492478Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.74923037Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=737.539µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.749928165Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.750687116Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=758.761µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.751500979Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.751545504Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=46.408µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.752242057Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.756001043Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=3.758734ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.756640267Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.760395537Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=3.754879ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.761119692Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.76483758Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=3.717718ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.765438082Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.769241081Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=3.802789ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.769872251Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.773628642Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=3.756191ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.774263538Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.774309235Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=46.207µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.775003073Z level=info msg="Executing migration" id=create_alert_configuration_table
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.775549974Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=546.78µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.776229024Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.780127533Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=3.898239ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.780769142Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.780816391Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=46.037µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.781515419Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.785437843Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=3.922033ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.786084932Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.786796273Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=711.111µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.787512444Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.791398619Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=3.887368ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.792039338Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.792599803Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=560.406µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.793310403Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.79402464Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=713.886µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.794707476Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.798650169Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=3.942593ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.799314501Z level=info msg="Executing migration" id="create provenance_type table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.799880639Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=566.118µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.800622478Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.801335261Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=712.454µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.802024009Z level=info msg="Executing migration" id="create alert_image table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.802602349Z level=info msg="Migration successfully executed" id="create alert_image table" duration=578.209µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.803278674Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.80399275Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=713.796µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.804721845Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.804766449Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=44.864µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.805527774Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.806245487Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=717.632µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.806951067Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.807646809Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=696.714µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.808223745Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.808476603Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
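
The lone warn line in this section is the migrator reconciling drift: the index drop has evidently already happened on disk, but no row for that id exists in the migration log, so it records a skip instead of failing. The bookkeeping is a small table consulted before and written after every migration; a sketch follows with an assumed column set (the table name matches the message, the rest is illustrative).

package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

// hasRun reports whether a migration id is already recorded.
func hasRun(db *sql.DB, id string) (bool, error) {
	var n int
	err := db.QueryRow(`SELECT COUNT(*) FROM migration_log
	                     WHERE migration_id = ?`, id).Scan(&n)
	return n > 0, err
}

// record stores the outcome so the id is never attempted again.
func record(db *sql.DB, id string, ok bool) error {
	_, err := db.Exec(`INSERT INTO migration_log
	    (migration_id, success, timestamp) VALUES (?, ?, ?)`,
		id, ok, time.Now().Unix())
	return err
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(1) // :memory: DBs are per-connection
	db.Exec(`CREATE TABLE migration_log (
	    id INTEGER PRIMARY KEY, migration_id TEXT,
	    success BOOLEAN, timestamp INTEGER)`)

	id := "drop unique orgID index on alert_configuration if exists"
	if done, _ := hasRun(db, id); !done {
		// ... probe the live schema here; if the index is already
		// gone, warn "Skipping migration" and record it as executed.
		record(db, id, true)
	}
	done, _ := hasRun(db, id)
	fmt.Println(done) // true: never attempted again
}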
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.809115687Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.809466228Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=350.091µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.810081768Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.810792558Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=710.419µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.81141439Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.815441461Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.02693ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.816091918Z level=info msg="Executing migration" id="create library_element table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.816877949Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=786.082µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.817584372Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.818349362Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=764.631µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.819043752Z level=info msg="Executing migration" id="create library_element_connection table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.819646258Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=602.326µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.820383578Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.82114346Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=756.615µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.821833341Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.82253851Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=704.738µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.823285038Z level=info msg="Executing migration" id="increase max description length to 2048"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.823303502Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=19.166µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.823932648Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.823976942Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=44.654µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.824613772Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.82481437Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=200.808µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.82547735Z level=info msg="Executing migration" id="create data_keys table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.826194282Z level=info msg="Migration successfully executed" id="create data_keys table" duration=716.862µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.826930489Z level=info msg="Executing migration" id="create secrets table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.827525901Z level=info msg="Migration successfully executed" id="create secrets table" duration=595.493µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.828259023Z level=info msg="Executing migration" id="rename data_keys name column to id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.851452144Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=23.193422ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.852377498Z level=info msg="Executing migration" id="add name column into data_keys"
Nov 25 09:34:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v19: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.857849023Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.471644ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.858774006Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.858939348Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=165.022µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.859726453Z level=info msg="Executing migration" id="rename data_keys name column to label"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.882965369Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=23.238757ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.883941007Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.907982928Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=24.039957ms
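
data_keys next to secrets is the shape of envelope encryption: secret payloads are sealed with per-scope data keys (DEKs), and each DEK is itself stored encrypted under a master key, which is why data_keys ends up needing both a name and a label (the rename dance above re-purposes those columns while preserving values; each ~23ms rename is again a table rewrite, while the 165µs middle step is a single column-to-column UPDATE). The crypto side, as a minimal AES-GCM sketch under those assumptions (key handling simplified, errors elided in main; not Grafana's implementation):

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext with key using AES-GCM, prepending the nonce
// so the ciphertext is self-contained.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	master := make([]byte, 32) // KEK: in practice from config or a KMS
	rand.Read(master)

	dek := make([]byte, 32) // fresh data key for one scope
	rand.Read(dek)

	// secrets row: payload sealed with the DEK.
	sealedSecret, _ := seal(dek, []byte("datasource password"))
	// data_keys row: the DEK itself sealed with the master key.
	sealedDEK, _ := seal(master, dek)

	fmt.Printf("store %d-byte secret + %d-byte wrapped DEK\n",
		len(sealedSecret), len(sealedDEK))
}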
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.909020475Z level=info msg="Executing migration" id="create kv_store table v1"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.909887208Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=866.564µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.910817172Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.911857983Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.040601ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.913090426Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.913318798Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=228.321µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.914271052Z level=info msg="Executing migration" id="create permission table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.915005436Z level=info msg="Migration successfully executed" id="create permission table" duration=734.114µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.915940449Z level=info msg="Executing migration" id="add unique index permission.role_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.916731891Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=791.071µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.917702711Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.918521575Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=818.474µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.919429326Z level=info msg="Executing migration" id="create role table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.920124055Z level=info msg="Migration successfully executed" id="create role table" duration=692.714µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.921034121Z level=info msg="Executing migration" id="add column display_name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.925700988Z level=info msg="Migration successfully executed" id="add column display_name" duration=4.666535ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.926443869Z level=info msg="Executing migration" id="add column group_name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.930922872Z level=info msg="Migration successfully executed" id="add column group_name" duration=4.478652ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.931679569Z level=info msg="Executing migration" id="add index role.org_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.932492751Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=813.063µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.93337255Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.934443338Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.070617ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.935474362Z level=info msg="Executing migration" id="add index role_org_id_uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.936495637Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.020895ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.937470285Z level=info msg="Executing migration" id="create team role table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.93819491Z level=info msg="Migration successfully executed" id="create team role table" duration=724.506µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.939131936Z level=info msg="Executing migration" id="add index team_role.org_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.940000384Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=868.117µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.940868159Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.941759119Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=890.568µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.942724228Z level=info msg="Executing migration" id="add index team_role.team_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.943539765Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=815.236µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.944740759Z level=info msg="Executing migration" id="create user role table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.945384283Z level=info msg="Migration successfully executed" id="create user role table" duration=643.364µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.946336838Z level=info msg="Executing migration" id="add index user_role.org_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.947164347Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=828.121µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.948108017Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.948945545Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=837.248µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.949839711Z level=info msg="Executing migration" id="add index user_role.user_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.950553908Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=713.746µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.951452571Z level=info msg="Executing migration" id="create builtin role table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.952076068Z level=info msg="Migration successfully executed" id="create builtin role table" duration=622.884µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.953028502Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.953761564Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=732.781µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.954673313Z level=info msg="Executing migration" id="add index builtin_role.name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.955413288Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=739.445µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.956737584Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.961805418Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.067633ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.962630823Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.963539226Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=907.1µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.964410308Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.965430781Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.020063ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.966245357Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.967188034Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=942.827µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.967953206Z level=info msg="Executing migration" id="add unique index role.uid"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.968842773Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=889.336µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.969613918Z level=info msg="Executing migration" id="create seed assignment table"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.970196104Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=581.966µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.970917194Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.971678198Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=760.492µs
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.972519966Z level=info msg="Executing migration" id="add column hidden to role table"
Nov 25 09:34:15 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:15 compute-0 ceph-mon[74207]: Deploying daemon haproxy.rgw.default.compute-0.jgcdmc on compute-0
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.980284822Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.763705ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.981075152Z level=info msg="Executing migration" id="permission kind migration"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.986105835Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.030353ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.986761551Z level=info msg="Executing migration" id="permission attribute migration"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.991626893Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=4.86512ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.992419096Z level=info msg="Executing migration" id="permission identifier migration"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.998197699Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.778263ms
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.999006244Z level=info msg="Executing migration" id="add permission identifier index"
Nov 25 09:34:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:15.999862928Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=856.424µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.000575111Z level=info msg="Executing migration" id="add permission action scope role_id index"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.001495336Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=919.724µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.002307868Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.003127873Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=819.895µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.003783579Z level=info msg="Executing migration" id="create query_history table v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.004735844Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=951.885µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.005377344Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.006269426Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=891.801µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.007088279Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.007162659Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=74.86µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.007847309Z level=info msg="Executing migration" id="rbac disabled migrator"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.007872638Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=25.96µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.008576294Z level=info msg="Executing migration" id="teams permissions migration"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.009001956Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=425.562µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.009631924Z level=info msg="Executing migration" id="dashboard permissions"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.010118781Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=487.298µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.011111242Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.011666078Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=555.115µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.012472789Z level=info msg="Executing migration" id="drop managed folder create actions"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.012652367Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=179.388µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.013748824Z level=info msg="Executing migration" id="alerting notification permissions"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.014359586Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=609.368µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.015097597Z level=info msg="Executing migration" id="create query_history_star table v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.015683231Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=585.513µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.01639887Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.01723176Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=833.47µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.017936598Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.022826717Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=4.889928ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.023514734Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.02357681Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=62.187µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.024312959Z level=info msg="Executing migration" id="create correlation table v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.025164463Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=851.074µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.026004216Z level=info msg="Executing migration" id="add index correlations.uid"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.026832458Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=827.751µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.027548307Z level=info msg="Executing migration" id="add index correlations.source_uid"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.028359146Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=810.969µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.029030261Z level=info msg="Executing migration" id="add correlation config column"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.034517756Z level=info msg="Migration successfully executed" id="add correlation config column" duration=5.4855ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.035277126Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.036092543Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=815.207µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.036742359Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.037788049Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.045681ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.038561117Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.054925014Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=16.363145ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.055856589Z level=info msg="Executing migration" id="create correlation v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.057007048Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.150149ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.057790235Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.058808986Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.01858ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.059638328Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.060545709Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=907.251µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.061300101Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.062127321Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=827.098µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.062940974Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.063142484Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=201.159µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.063783884Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.064481058Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=697.155µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.065159316Z level=info msg="Executing migration" id="add provisioning column"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.07029121Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.133348ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.070995568Z level=info msg="Executing migration" id="create entity_events table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.071596762Z level=info msg="Migration successfully executed" id="create entity_events table" duration=590.121µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.07223277Z level=info msg="Executing migration" id="create dashboard public config v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.073009826Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=776.934µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.073706288Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.074020751Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.074691736Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.07499143Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.075651124Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.076309105Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=657.67µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.077024703Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.077751994Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=727.141µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.078462434Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.079271629Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=808.795µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.079975165Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.081074117Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.09839ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.081876208Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.082931689Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.053867ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.083647518Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.084557262Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=909.484µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.085219872Z level=info msg="Executing migration" id="Drop public config table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.085909062Z level=info msg="Migration successfully executed" id="Drop public config table" duration=689.26µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.086524241Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.087344567Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=818.703µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.087932316Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.088683211Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=751.997µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.089353414Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.090192255Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=838.511µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.090797095Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.091553502Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=756.257µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.092316719Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.110494186Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=18.177837ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.111463904Z level=info msg="Executing migration" id="add annotations_enabled column"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.117874418Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.410805ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.118582683Z level=info msg="Executing migration" id="add time_selection_enabled column"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.123762908Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=5.179914ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.124544962Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.124715173Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=171.524µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.125433908Z level=info msg="Executing migration" id="add share column"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.130471734Z level=info msg="Migration successfully executed" id="add share column" duration=5.037666ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.131141517Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.131294997Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=153.28µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.131997631Z level=info msg="Executing migration" id="create file table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.132669357Z level=info msg="Migration successfully executed" id="create file table" duration=671.605µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.133349148Z level=info msg="Executing migration" id="file table idx: path natural pk"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.134120403Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=771.034µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.134831784Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.135662419Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=828.912µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.13650608Z level=info msg="Executing migration" id="create file_meta table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.137278846Z level=info msg="Migration successfully executed" id="create file_meta table" duration=771.654µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.13809248Z level=info msg="Executing migration" id="file table idx: path key"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.139086865Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=993.984µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.139934122Z level=info msg="Executing migration" id="set path collation in file table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.140002621Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=68.83µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.140763065Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.140817327Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=45.255µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.141551891Z level=info msg="Executing migration" id="managed permissions migration"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.141961243Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=409.412µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.142632769Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.142789424Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=156.756µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.143452594Z level=info msg="Executing migration" id="RBAC action name migrator"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.144487886Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.03491ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.145265672Z level=info msg="Executing migration" id="Add UID column to playlist"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.151072328Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=5.807738ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.151727193Z level=info msg="Executing migration" id="Update uid column values in playlist"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.151850665Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=123.693µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.152456778Z level=info msg="Executing migration" id="Add index for uid in playlist"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.153320065Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=862.965µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.154262272Z level=info msg="Executing migration" id="update group index for alert rules"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.154523894Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=263.706µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.155259071Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.155406448Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=147.507µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.156133169Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.156454284Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=321.566µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.157069854Z level=info msg="Executing migration" id="add action column to seed_assignment"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.164331361Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=7.261198ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.165122924Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.171439982Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.316978ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.172090268Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.172841474Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=751.045µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.173465832Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.234101183Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=60.63471ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.235023661Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.236103698Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.079786ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.237034883Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.23816837Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.133166ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.239269605Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.258685055Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=19.413836ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.259694989Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.266039689Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.344099ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.266927984Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.267155502Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=225.756µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.268074906Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.268213567Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=138.601µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.268887197Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.269060564Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=173.136µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.269715388Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.269867235Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=151.847µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.270576061Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.270741292Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=163.228µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.271456821Z level=info msg="Executing migration" id="create folder table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.272130862Z level=info msg="Migration successfully executed" id="create folder table" duration=673.851µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.272751251Z level=info msg="Executing migration" id="Add index for parent_uid"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.273676876Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=925.214µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.274537899Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.275418078Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=879.838µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.276108459Z level=info msg="Executing migration" id="Update folder title length"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.276127945Z level=info msg="Migration successfully executed" id="Update folder title length" duration=19.977µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.276801065Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.277639937Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=838.512µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.278316251Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.279119235Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=802.703µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.279808995Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.280805734Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=996.478µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.281629156Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.281992051Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=361.381µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.28258071Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.282791748Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=210.807µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.283432566Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.28423044Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=797.292µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.284856811Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.28568953Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=834.063µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.286367569Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.287150193Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=781.984µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.28778496Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.288582814Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=797.343µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.289238591Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.290016387Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=778.928µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.290616268Z level=info msg="Executing migration" id="create anon_device table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.291258088Z level=info msg="Migration successfully executed" id="create anon_device table" duration=641.76µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.291877355Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.292733779Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=855.983µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.293553324Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.294339275Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=785.57µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.295086204Z level=info msg="Executing migration" id="create signing_key table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.295767659Z level=info msg="Migration successfully executed" id="create signing_key table" duration=681.455µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.296518543Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.297317069Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=798.245µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.298013813Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.298767703Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=753.73µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.299472983Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.299717073Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=243.669µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.300620877Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.30627242Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=5.651363ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.307007937Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.307599202Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=591.545µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.308273853Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.309085913Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=811.75µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.309797686Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.31061117Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=813.104µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.311282645Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.312082682Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=799.817µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.312756543Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.313579605Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=822.69µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.314242214Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.315047773Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=805.268µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.315692738Z level=info msg="Executing migration" id="create sso_setting table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.316449004Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=756.426µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.317432228Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.318025586Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=593.839µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.31868027Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.318877602Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=199.546µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.319641412Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.319687829Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=46.687µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.320428837Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.326108562Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=5.679355ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.327763142Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.335212294Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.449002ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.335939715Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.33621834Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=278.385µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=migrator t=2025-11-25T09:34:16.336853378Z level=info msg="migrations completed" performed=547 skipped=0 duration=1.253328898s
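
The migrator lines above are Grafana's logfmt output: each record carries an `id` and a `duration` as key=value pairs, with values quoted when they contain spaces. A minimal parsing sketch for pulling those fields out of a captured journal line; the regex and function name here are mine, not anything Grafana ships:

```python
import re

# Matches logfmt pairs like key=value or key="quoted value" in the
# Grafana migrator lines above (field names follow this log sample,
# not any official Grafana schema).
PAIR = re.compile(r'(\w+)=("([^"]*)"|\S+)')

def parse_logfmt(line: str) -> dict:
    """Return the key=value pairs from one journal line as a dict."""
    out = {}
    for key, raw, quoted in PAIR.findall(line):
        out[key] = quoted if raw.startswith('"') else raw
    return out

line = ('logger=migrator t=2025-11-25T09:34:16.272130862Z level=info '
        'msg="Migration successfully executed" id="create folder table" '
        'duration=673.851µs')
rec = parse_logfmt(line)
print(rec["id"], rec["duration"])   # -> create folder table 673.851µs
```
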
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=sqlstore t=2025-11-25T09:34:16.337854405Z level=info msg="Created default organization"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=secrets t=2025-11-25T09:34:16.33871139Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=plugin.store t=2025-11-25T09:34:16.352082886Z level=info msg="Loading plugins..."
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=local.finder t=2025-11-25T09:34:16.413013174Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=plugin.store t=2025-11-25T09:34:16.413042489Z level=info msg="Plugins loaded" count=55 duration=60.960393ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=query_data t=2025-11-25T09:34:16.415385717Z level=info msg="Query Service initialization"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=live.push_http t=2025-11-25T09:34:16.418065878Z level=info msg="Live Push Gateway initialization"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.migration t=2025-11-25T09:34:16.419537623Z level=info msg=Starting
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.migration t=2025-11-25T09:34:16.419857836Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.migration orgID=1 t=2025-11-25T09:34:16.420242141Z level=info msg="Migrating alerts for organisation"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.migration orgID=1 t=2025-11-25T09:34:16.420955846Z level=info msg="Alerts found to migrate" alerts=0
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.migration t=2025-11-25T09:34:16.422315059Z level=info msg="Completed alerting migration"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.state.manager t=2025-11-25T09:34:16.43564176Z level=info msg="Running in alternative execution of Error/NoData mode"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=infra.usagestats.collector t=2025-11-25T09:34:16.437397499Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=provisioning.datasources t=2025-11-25T09:34:16.43832559Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=provisioning.alerting t=2025-11-25T09:34:16.446405881Z level=info msg="starting to provision alerting"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=provisioning.alerting t=2025-11-25T09:34:16.446422272Z level=info msg="finished to provision alerting"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=http.server t=2025-11-25T09:34:16.448249597Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=http.server t=2025-11-25T09:34:16.448539965Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.state.manager t=2025-11-25T09:34:16.456873724Z level=info msg="Warming state cache for startup"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.state.manager t=2025-11-25T09:34:16.457088239Z level=info msg="State cache has been initialized" states=0 duration=213.743µs
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.multiorg.alertmanager t=2025-11-25T09:34:16.457266465Z level=info msg="Starting MultiOrg Alertmanager"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ngalert.scheduler t=2025-11-25T09:34:16.457286201Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ticker t=2025-11-25T09:34:16.457318853Z level=info msg=starting first_tick=2025-11-25T09:34:20Z
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=grafanaStorageLogger t=2025-11-25T09:34:16.457645669Z level=info msg="Storage starting"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=provisioning.dashboard t=2025-11-25T09:34:16.500968636Z level=info msg="starting to provision dashboards"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=grafana.update.checker t=2025-11-25T09:34:16.517783213Z level=info msg="Update check succeeded" duration=60.444073ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=plugins.update.checker t=2025-11-25T09:34:16.553653019Z level=info msg="Update check succeeded" duration=95.340454ms
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=provisioning.dashboard t=2025-11-25T09:34:16.666219395Z level=info msg="finished to provision dashboards"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=grafana-apiserver t=2025-11-25T09:34:16.672650016Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Nov 25 09:34:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=grafana-apiserver t=2025-11-25T09:34:16.673012951Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Nov 25 09:34:16 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 10 completed events
Nov 25 09:34:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:34:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:17 compute-0 ceph-mon[74207]: pgmap v19: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 09:34:17 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:34:17.434Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003593223s
Nov 25 09:34:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v20: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 09:34:19 compute-0 ceph-mon[74207]: pgmap v20: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 09:34:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v21: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
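
The pgmap lines report cluster capacity in binary units (KiB/MiB/GiB). A small sketch, with the field layout read off these lines rather than any Ceph schema, that converts the "used" and "avail" figures to byte counts:

```python
import re

# Turns the "85 MiB used, 60 GiB / 60 GiB avail" part of the pgmap
# lines above into byte counts (binary units, as Ceph prints them).
UNITS = {"B": 1, "KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def to_bytes(size: str) -> int:
    value, unit = size.split()
    return int(float(value) * UNITS[unit])

line = ("pgmap v19: 12 pgs: 12 active+clean; 456 KiB data, "
        "85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s")
used, avail = re.search(r'([\d.]+ \w+) used, ([\d.]+ \w+) / ', line).groups()
print(to_bytes(used), to_bytes(avail))   # 89128960 64424509440
```
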
Nov 25 09:34:20 compute-0 podman[97326]: 2025-11-25 09:34:20.326346486 +0000 UTC m=+4.964135575 container create 1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a (image=quay.io/ceph/haproxy:2.3, name=bold_shirley)
Nov 25 09:34:20 compute-0 systemd[1]: Started libpod-conmon-1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a.scope.
Nov 25 09:34:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:20 compute-0 podman[97326]: 2025-11-25 09:34:20.374968722 +0000 UTC m=+5.012757821 container init 1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a (image=quay.io/ceph/haproxy:2.3, name=bold_shirley)
Nov 25 09:34:20 compute-0 podman[97326]: 2025-11-25 09:34:20.379650848 +0000 UTC m=+5.017439927 container start 1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a (image=quay.io/ceph/haproxy:2.3, name=bold_shirley)
Nov 25 09:34:20 compute-0 podman[97326]: 2025-11-25 09:34:20.380840701 +0000 UTC m=+5.018629790 container attach 1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a (image=quay.io/ceph/haproxy:2.3, name=bold_shirley)
Nov 25 09:34:20 compute-0 bold_shirley[97432]: 0 0
Nov 25 09:34:20 compute-0 systemd[1]: libpod-1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a.scope: Deactivated successfully.
Nov 25 09:34:20 compute-0 conmon[97432]: conmon 1b8e43beb27a104c46dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a.scope/container/memory.events
Nov 25 09:34:20 compute-0 podman[97326]: 2025-11-25 09:34:20.383885912 +0000 UTC m=+5.021675001 container died 1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a (image=quay.io/ceph/haproxy:2.3, name=bold_shirley)
Nov 25 09:34:20 compute-0 podman[97326]: 2025-11-25 09:34:20.316339913 +0000 UTC m=+4.954129012 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 09:34:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-73a3e975b8479837347eb9ee186c462904d2dada361a6176090a2aae7c77ff40-merged.mount: Deactivated successfully.
Nov 25 09:34:20 compute-0 podman[97326]: 2025-11-25 09:34:20.401366233 +0000 UTC m=+5.039155313 container remove 1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a (image=quay.io/ceph/haproxy:2.3, name=bold_shirley)
Nov 25 09:34:20 compute-0 systemd[1]: libpod-conmon-1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a.scope: Deactivated successfully.
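
The short-lived bold_shirley container above runs through the full podman lifecycle in under a second: create, init, start, attach, died, remove. A sketch that reduces such journal lines to (event, short-id) pairs as a reading aid; for live monitoring, podman's own `podman events` stream is the real source:

```python
import re

# Pulls the event verb and 64-hex container id out of podman journal
# lines of the form seen above, e.g.
#   "... container start 1b8e43be... (image=quay.io/ceph/haproxy:2.3, ...)"
EVENT = re.compile(
    r'container (create|init|start|attach|died|remove) ([0-9a-f]{64})')

def container_events(lines):
    for line in lines:
        m = EVENT.search(line)
        if m:
            yield m.group(1), m.group(2)[:12]

sample = [
    "container create 1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a (image=quay.io/ceph/haproxy:2.3, name=bold_shirley)",
    "container died 1b8e43beb27a104c46dc0111762e42767deae98a8bb6d34dac0e03c4e7aaf18a (image=quay.io/ceph/haproxy:2.3, name=bold_shirley)",
]
for event, cid in container_events(sample):
    print(event, cid)   # create 1b8e43beb27a / died 1b8e43beb27a
```
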
Nov 25 09:34:20 compute-0 systemd[1]: Reloading.
Nov 25 09:34:20 compute-0 systemd-sysv-generator[97476]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:20 compute-0 systemd-rc-local-generator[97473]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:20 compute-0 systemd[1]: Reloading.
Nov 25 09:34:20 compute-0 systemd-rc-local-generator[97514]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:20 compute-0 systemd-sysv-generator[97517]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:20 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.jgcdmc for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:34:20 compute-0 podman[97568]: 2025-11-25 09:34:20.994762014 +0000 UTC m=+0.027812580 container create e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:34:21 compute-0 ceph-mon[74207]: pgmap v21: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 09:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3da7bb9163464993cfd60e9608c9a4a39bb8b1206b49611ec036c9468ca821/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:21 compute-0 podman[97568]: 2025-11-25 09:34:21.036189837 +0000 UTC m=+0.069240394 container init e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:34:21 compute-0 podman[97568]: 2025-11-25 09:34:21.039878992 +0000 UTC m=+0.072929548 container start e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:34:21 compute-0 bash[97568]: e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd
Nov 25 09:34:21 compute-0 podman[97568]: 2025-11-25 09:34:20.983357545 +0000 UTC m=+0.016408122 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 09:34:21 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.jgcdmc for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:34:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc[97580]: [NOTICE] 328/093421 (2) : New worker #1 (4) forked
Nov 25 09:34:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:21.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
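
Each RGW request shows up three times above: a "starting new request" line, a "req done" line, and a beast access-log line packing client, timestamp, request line, status, and latency into one record. The sketch below splits that last line apart; the field layout is inferred from this sample, not taken from radosgw documentation:

```python
import re

# Splits the beast access-log line shown above into its obvious fields
# (client, user, timestamp, request line, status, latency).
BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous '
        '[25/Nov/2025:09:34:21.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000010s')
m = BEAST.search(line)
print(m["client"], m["status"], m["latency"])  # 192.168.122.100 200 0.001000010
```
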
Nov 25 09:34:21 compute-0 sudo[97265]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 25 09:34:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:21 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.jrahab on compute-2
Nov 25 09:34:21 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.jrahab on compute-2
Nov 25 09:34:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v22: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:21 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:34:21 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:34:21 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:34:21 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:34:22 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:34:22 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:34:22 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:22 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:22 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:22 compute-0 ceph-mon[74207]: Deploying daemon haproxy.rgw.default.compute-2.jrahab on compute-2
Nov 25 09:34:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:23.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:34:23 compute-0 ceph-mon[74207]: pgmap v22: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v23: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:24.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:34:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:34:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 25 09:34:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Nov 25 09:34:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:24 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:34:24 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:34:24 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:34:24 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:34:24 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.aswfow on compute-2
Nov 25 09:34:24 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.aswfow on compute-2
Nov 25 09:34:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:25.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:34:25 compute-0 ceph-mon[74207]: pgmap v23: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:25 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:25 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:25 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:25 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:25 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:34:25 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:34:25 compute-0 ceph-mon[74207]: Deploying daemon keepalived.rgw.default.compute-2.aswfow on compute-2
Nov 25 09:34:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v24: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:26.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:27 compute-0 ceph-mon[74207]: pgmap v24: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v25: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:34:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:29.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:34:29 compute-0 ceph-mon[74207]: pgmap v25: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:34:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:34:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 25 09:34:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:29 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:34:29 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:34:29 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:34:29 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:34:29 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.ulmpfs on compute-0
Nov 25 09:34:29 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.ulmpfs on compute-0
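
cephadm announces each ingress daemon with a "Deploying daemon <name> on <host>" line; across this stretch that covers haproxy and keepalived instances on compute-0 and compute-2. A small sketch to collect those pairs from a captured log, so the rollout can be read at a glance:

```python
import re

# Collects the cephadm "Deploying daemon <name> on <host>" lines above
# into (daemon, host) pairs.
DEPLOY = re.compile(r'Deploying daemon (\S+) on (\S+)')

log = [
    "Deploying daemon haproxy.rgw.default.compute-2.jrahab on compute-2",
    "Deploying daemon keepalived.rgw.default.compute-2.aswfow on compute-2",
    "Deploying daemon keepalived.rgw.default.compute-0.ulmpfs on compute-0",
]
print([DEPLOY.search(l).groups() for l in log])
# [('haproxy.rgw.default.compute-2.jrahab', 'compute-2'), ...]
```
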
Nov 25 09:34:29 compute-0 sudo[97590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:29 compute-0 sudo[97590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:29 compute-0 sudo[97590]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:29 compute-0 sudo[97615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:29 compute-0 sudo[97615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v26: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:30 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:30 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:30 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:31.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:31 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:34:31 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:34:31 compute-0 ceph-mon[74207]: Deploying daemon keepalived.rgw.default.compute-0.ulmpfs on compute-0
Nov 25 09:34:31 compute-0 ceph-mon[74207]: pgmap v26: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v27: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:32.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:32 compute-0 ceph-mon[74207]: pgmap v27: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:32 compute-0 podman[97673]: 2025-11-25 09:34:32.901673782 +0000 UTC m=+2.819519226 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 25 09:34:32 compute-0 podman[97673]: 2025-11-25 09:34:32.911133727 +0000 UTC m=+2.828979171 container create 15569ff17d60393ab5fd6369338b0f59ca135e90111ef1ae410798e437a0c4d7 (image=quay.io/ceph/keepalived:2.2.4, name=kind_jones, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 25 09:34:32 compute-0 systemd[1]: Started libpod-conmon-15569ff17d60393ab5fd6369338b0f59ca135e90111ef1ae410798e437a0c4d7.scope.
Nov 25 09:34:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:32 compute-0 podman[97673]: 2025-11-25 09:34:32.963137674 +0000 UTC m=+2.880983128 container init 15569ff17d60393ab5fd6369338b0f59ca135e90111ef1ae410798e437a0c4d7 (image=quay.io/ceph/keepalived:2.2.4, name=kind_jones, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, name=keepalived, release=1793, architecture=x86_64, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 25 09:34:32 compute-0 podman[97673]: 2025-11-25 09:34:32.967436319 +0000 UTC m=+2.885281753 container start 15569ff17d60393ab5fd6369338b0f59ca135e90111ef1ae410798e437a0c4d7 (image=quay.io/ceph/keepalived:2.2.4, name=kind_jones, architecture=x86_64, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=)
Nov 25 09:34:32 compute-0 podman[97673]: 2025-11-25 09:34:32.968377403 +0000 UTC m=+2.886222847 container attach 15569ff17d60393ab5fd6369338b0f59ca135e90111ef1ae410798e437a0c4d7 (image=quay.io/ceph/keepalived:2.2.4, name=kind_jones, architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-type=git, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived)
Nov 25 09:34:32 compute-0 kind_jones[97754]: 0 0
Nov 25 09:34:32 compute-0 systemd[1]: libpod-15569ff17d60393ab5fd6369338b0f59ca135e90111ef1ae410798e437a0c4d7.scope: Deactivated successfully.
Nov 25 09:34:32 compute-0 podman[97673]: 2025-11-25 09:34:32.971323447 +0000 UTC m=+2.889168892 container died 15569ff17d60393ab5fd6369338b0f59ca135e90111ef1ae410798e437a0c4d7 (image=quay.io/ceph/keepalived:2.2.4, name=kind_jones, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, version=2.2.4, release=1793, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, distribution-scope=public)
Nov 25 09:34:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4df7f6479418557d2ebe86b01c42c5db6847f384d7119c20a425053a4937600b-merged.mount: Deactivated successfully.
Nov 25 09:34:32 compute-0 podman[97673]: 2025-11-25 09:34:32.987735831 +0000 UTC m=+2.905581275 container remove 15569ff17d60393ab5fd6369338b0f59ca135e90111ef1ae410798e437a0c4d7 (image=quay.io/ceph/keepalived:2.2.4, name=kind_jones, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-type=git, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=2.2.4, release=1793)
Nov 25 09:34:33 compute-0 systemd[1]: libpod-conmon-15569ff17d60393ab5fd6369338b0f59ca135e90111ef1ae410798e437a0c4d7.scope: Deactivated successfully.
Nov 25 09:34:33 compute-0 systemd[1]: Reloading.
Nov 25 09:34:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:33.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:33 compute-0 systemd-sysv-generator[97801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:33 compute-0 systemd-rc-local-generator[97797]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:33 compute-0 systemd[1]: Reloading.
Nov 25 09:34:33 compute-0 systemd-sysv-generator[97838]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:33 compute-0 systemd-rc-local-generator[97834]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:33 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.ulmpfs for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:34:33 compute-0 podman[97892]: 2025-11-25 09:34:33.586715361 +0000 UTC m=+0.028709970 container create 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, architecture=x86_64, version=2.2.4, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, release=1793, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container)
Nov 25 09:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33b3ec58f50d480b9d0ce7c9630e96790bca71855c4fc10cf9dfd82aa9e14e6/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:33 compute-0 podman[97892]: 2025-11-25 09:34:33.624948104 +0000 UTC m=+0.066942723 container init 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20)
Nov 25 09:34:33 compute-0 podman[97892]: 2025-11-25 09:34:33.628526821 +0000 UTC m=+0.070521431 container start 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, version=2.2.4, release=1793)
Nov 25 09:34:33 compute-0 bash[97892]: 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2
Nov 25 09:34:33 compute-0 podman[97892]: 2025-11-25 09:34:33.574700849 +0000 UTC m=+0.016695468 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
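Podman emits one journal event per container lifecycle step: image pull, container create, init, start. The pull event above carries an earlier timestamp than the create/init/start events because podman flushes events when the command finishes, not in display order. A minimal way to watch the same event stream live, using standard podman flags:

    # follow container lifecycle events like the ones logged above
    podman events --filter event=create --filter event=start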
Nov 25 09:34:33 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.ulmpfs for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:34:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:33 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 25 09:34:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:33 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 25 09:34:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:33 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 25 09:34:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:33 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 25 09:34:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:33 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 25 09:34:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:33 2025: Starting VRRP child process, pid=4
Nov 25 09:34:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:33 2025: Startup complete
Nov 25 09:34:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:33 2025: (VI_0) Entering BACKUP STATE (init)
Nov 25 09:34:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:33 2025: VRRP_Script(check_backend) succeeded
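The keepalived instance that cephadm deployed for ingress.rgw.default starts its VRRP instance (VI_0) in BACKUP and only promotes itself once the check_backend script passes and no higher-priority advertisement is heard. A minimal sketch for confirming where the ingress daemons landed and which node holds the virtual IP, assuming the standard cephadm keepalived/haproxy pair (the interface carrying the VIP is whatever the ingress spec names):

    # list the ingress daemons cephadm has placed on each host
    ceph orch ps --daemon-type keepalived
    ceph orch ps --daemon-type haproxy
    # on the MASTER node the VIP shows up as an extra address
    ip -brief address show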
Nov 25 09:34:33 compute-0 sudo[97615]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 25 09:34:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:33 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev c67d856d-13e5-4608-8320-1334780a23e9 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 25 09:34:33 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event c67d856d-13e5-4608-8320-1334780a23e9 (Updating ingress.rgw.default deployment (+4 -> 4)) in 19 seconds
Nov 25 09:34:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 25 09:34:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:33 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 8208bb2a-d8cd-4096-99b6-139dffa3180e (Updating prometheus deployment (+1 -> 1))
Nov 25 09:34:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v28: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:33 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Nov 25 09:34:33 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Nov 25 09:34:33 compute-0 sudo[97912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:33 compute-0 sudo[97912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:33 compute-0 sudo[97912]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:34 compute-0 sudo[97937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:34 compute-0 sudo[97937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
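The two sudo lines show how the active mgr drives deployments: it first locates python3 on the target host, then runs a content-addressed copy of the cephadm binary from /var/lib/ceph/<fsid>/ with `_orch deploy`, here for prometheus on compute-0. The same binary can be invoked directly on the host to inspect the result; a minimal sketch:

    # on the host, list every daemon cephadm has deployed for this cluster
    sudo cephadm ls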
Nov 25 09:34:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:34:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:34.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
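The once-per-second anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 are the ingress haproxy health checks probing this radosgw backend; op status=0 with http_status=200 keeps the backend in rotation. A hand-run equivalent probe, assuming the beast frontend listens on port 8080 on this host (the port is an assumption, it does not appear in these lines):

    # manual health probe equivalent to haproxy's layer-7 check (port assumed)
    curl -sS -o /dev/null -w '%{http_code}\n' -I http://192.168.122.100:8080/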
Nov 25 09:34:34 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:34 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:34 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:34 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:35.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:35 compute-0 ceph-mon[74207]: pgmap v28: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:35 compute-0 ceph-mon[74207]: Deploying daemon prometheus.compute-0 on compute-0
Nov 25 09:34:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v29: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:36.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:36 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 11 completed events
Nov 25 09:34:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:34:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:37.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:34:37 2025: (VI_0) Entering MASTER STATE
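After about four seconds in BACKUP without seeing a higher-priority advertisement, VI_0 promotes this node to MASTER and it takes over the ingress VIP. The transition history for this daemon can be replayed from the journal by the syslog identifier shown on the lines above:

    # replay the VRRP state transitions for this keepalived daemon
    journalctl -t ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs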
Nov 25 09:34:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:37 compute-0 podman[97996]: 2025-11-25 09:34:37.643487071 +0000 UTC m=+3.371691268 volume create d6c289815ce83f5cd944e2bf393630fe7ae672cb03049bdda3be3d26757b7de9
Nov 25 09:34:37 compute-0 podman[97996]: 2025-11-25 09:34:37.648504621 +0000 UTC m=+3.376708819 container create a27d8fdb03a58bfe35bfdf7e0e3b338d8aa5b71032765d8cbe873e37a8b55aae (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_williamson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.648647) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277648709, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 5722, "num_deletes": 252, "total_data_size": 11408760, "memory_usage": 12116592, "flush_reason": "Manual Compaction"}
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 25 09:34:37 compute-0 systemd[1]: Started libpod-conmon-a27d8fdb03a58bfe35bfdf7e0e3b338d8aa5b71032765d8cbe873e37a8b55aae.scope.
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277672836, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 10019037, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 122, "largest_seqno": 5839, "table_properties": {"data_size": 9999246, "index_size": 12381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6341, "raw_key_size": 60056, "raw_average_key_size": 23, "raw_value_size": 9950686, "raw_average_value_size": 3964, "num_data_blocks": 548, "num_entries": 2510, "num_filter_entries": 2510, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063077, "oldest_key_time": 1764063077, "file_creation_time": 1764063277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 24399 microseconds, and 14074 cpu microseconds.
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.673049) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 10019037 bytes OK
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.673137) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.677707) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.677731) EVENT_LOG_v1 {"time_micros": 1764063277677727, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.677745) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 11383478, prev total WAL file size 11383478, number of live WAL files 2.
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.680282) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(9784KB) 13(45KB) 8(1944B)]
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277680356, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 10067689, "oldest_snapshot_seqno": -1}
Nov 25 09:34:37 compute-0 podman[97996]: 2025-11-25 09:34:37.626321607 +0000 UTC m=+3.354525824 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 25 09:34:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e322741294b3b6bc83db1e3a963d676ec33915d4a5a8c3b4682680c10742f7/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2316 keys, 10049763 bytes, temperature: kUnknown
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277700365, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 10049763, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10030392, "index_size": 12478, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 5829, "raw_key_size": 58567, "raw_average_key_size": 25, "raw_value_size": 9983618, "raw_average_value_size": 4310, "num_data_blocks": 554, "num_entries": 2316, "num_filter_entries": 2316, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764063277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.700560) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 10049763 bytes
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.701098) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 500.5 rd, 499.6 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(9.6, 0.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2603, records dropped: 287 output_compression: NoCompression
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.701112) EVENT_LOG_v1 {"time_micros": 1764063277701105, "job": 4, "event": "compaction_finished", "compaction_time_micros": 20114, "compaction_time_cpu_micros": 13386, "output_level": 6, "num_output_files": 1, "total_output_size": 10049763, "num_input_records": 2603, "num_output_records": 2316, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:34:37 compute-0 podman[97996]: 2025-11-25 09:34:37.702546871 +0000 UTC m=+3.430751088 container init a27d8fdb03a58bfe35bfdf7e0e3b338d8aa5b71032765d8cbe873e37a8b55aae (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_williamson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277702771, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277702827, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277702864, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 25 09:34:37 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:34:37.679700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
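The rocksdb lines record a manual compaction of the monitor's store.db: job 3 flushes the ~12 MiB memtable to a level-0 SST, job 4 compacts the three L0 files straight to L6 at roughly 500 MB/s, and the input tables plus the old WAL segment are deleted. The same compaction can be requested on demand through the monitor's admin interface:

    # ask this monitor to compact its RocksDB store now
    ceph tell mon.compute-0 compact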
Nov 25 09:34:37 compute-0 ceph-mon[74207]: pgmap v29: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:37 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:37 compute-0 podman[97996]: 2025-11-25 09:34:37.707205184 +0000 UTC m=+3.435409381 container start a27d8fdb03a58bfe35bfdf7e0e3b338d8aa5b71032765d8cbe873e37a8b55aae (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_williamson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 podman[97996]: 2025-11-25 09:34:37.708133564 +0000 UTC m=+3.436337761 container attach a27d8fdb03a58bfe35bfdf7e0e3b338d8aa5b71032765d8cbe873e37a8b55aae (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_williamson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 affectionate_williamson[98213]: 65534 65534
Nov 25 09:34:37 compute-0 systemd[1]: libpod-a27d8fdb03a58bfe35bfdf7e0e3b338d8aa5b71032765d8cbe873e37a8b55aae.scope: Deactivated successfully.
Nov 25 09:34:37 compute-0 podman[97996]: 2025-11-25 09:34:37.710028537 +0000 UTC m=+3.438232734 container died a27d8fdb03a58bfe35bfdf7e0e3b338d8aa5b71032765d8cbe873e37a8b55aae (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_williamson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-75e322741294b3b6bc83db1e3a963d676ec33915d4a5a8c3b4682680c10742f7-merged.mount: Deactivated successfully.
Nov 25 09:34:37 compute-0 podman[97996]: 2025-11-25 09:34:37.726480354 +0000 UTC m=+3.454684552 container remove a27d8fdb03a58bfe35bfdf7e0e3b338d8aa5b71032765d8cbe873e37a8b55aae (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_williamson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 podman[97996]: 2025-11-25 09:34:37.728082275 +0000 UTC m=+3.456286482 volume remove d6c289815ce83f5cd944e2bf393630fe7ae672cb03049bdda3be3d26757b7de9
Nov 25 09:34:37 compute-0 systemd[1]: libpod-conmon-a27d8fdb03a58bfe35bfdf7e0e3b338d8aa5b71032765d8cbe873e37a8b55aae.scope: Deactivated successfully.
Nov 25 09:34:37 compute-0 podman[98228]: 2025-11-25 09:34:37.787176399 +0000 UTC m=+0.027829631 volume create a658c3b949167a21017c22a1a2ab133d960c324327ec47dbf6b1eca4eb3cc7b0
Nov 25 09:34:37 compute-0 podman[98228]: 2025-11-25 09:34:37.792699353 +0000 UTC m=+0.033352594 container create c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_ganguly, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 systemd[1]: Started libpod-conmon-c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8.scope.
Nov 25 09:34:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d6446609f21e7a2fa4e36cbc0f6ae44885a1afbf0ecb94ff6d279489cdff3/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:37 compute-0 podman[98228]: 2025-11-25 09:34:37.84552587 +0000 UTC m=+0.086179121 container init c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_ganguly, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 podman[98228]: 2025-11-25 09:34:37.849617584 +0000 UTC m=+0.090270815 container start c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_ganguly, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 admiring_ganguly[98242]: 65534 65534
Nov 25 09:34:37 compute-0 podman[98228]: 2025-11-25 09:34:37.850820022 +0000 UTC m=+0.091473272 container attach c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_ganguly, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 systemd[1]: libpod-c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8.scope: Deactivated successfully.
Nov 25 09:34:37 compute-0 conmon[98242]: conmon c5bcb14e269bdc391548 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8.scope/container/memory.events
Nov 25 09:34:37 compute-0 podman[98228]: 2025-11-25 09:34:37.852183852 +0000 UTC m=+0.092837094 container died c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_ganguly, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v30: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b4d6446609f21e7a2fa4e36cbc0f6ae44885a1afbf0ecb94ff6d279489cdff3-merged.mount: Deactivated successfully.
Nov 25 09:34:37 compute-0 podman[98228]: 2025-11-25 09:34:37.871410212 +0000 UTC m=+0.112063443 container remove c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8 (image=quay.io/prometheus/prometheus:v2.51.0, name=admiring_ganguly, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:37 compute-0 podman[98228]: 2025-11-25 09:34:37.776977521 +0000 UTC m=+0.017630752 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 25 09:34:37 compute-0 podman[98228]: 2025-11-25 09:34:37.874127344 +0000 UTC m=+0.114780576 volume remove a658c3b949167a21017c22a1a2ab133d960c324327ec47dbf6b1eca4eb3cc7b0
Nov 25 09:34:37 compute-0 systemd[1]: libpod-conmon-c5bcb14e269bdc39154834643e2896b7d2c164fbb976bfbd6a1c965b2a97d8c8.scope: Deactivated successfully.
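Before writing the real prometheus unit, cephadm launches two short-lived probe containers (affectionate_williamson, admiring_ganguly) whose only output is "65534 65534" — the uid and gid the image runs as, i.e. the nobody user — apparently so it can set matching ownership on the daemon's data directories; both containers and their scratch volumes are removed immediately. One way to reproduce such a probe by hand, assuming the image ships a busybox shell:

    # print the uid/gid the prometheus image defaults to (expect 65534 65534)
    podman run --rm --entrypoint /bin/sh quay.io/prometheus/prometheus:v2.51.0 -c 'id -u; id -g'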
Nov 25 09:34:37 compute-0 systemd[1]: Reloading.
Nov 25 09:34:37 compute-0 systemd-rc-local-generator[98277]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:37 compute-0 systemd-sysv-generator[98281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:38 compute-0 systemd[1]: Reloading.
Nov 25 09:34:38 compute-0 systemd-rc-local-generator[98317]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:38 compute-0 systemd-sysv-generator[98320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
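The two systemd "Reloading." passes are cephadm installing and enabling the new prometheus unit; every daemon-reload re-runs the generators, which is why the rc.local and SysV network warnings repeat. Both are pre-existing host conditions rather than Ceph issues; the first one clears once the file is marked executable:

    # optional: silence the recurring rc.local generator warning
    chmod +x /etc/rc.d/rc.local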
Nov 25 09:34:38 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:34:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:38.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:38 compute-0 podman[98372]: 2025-11-25 09:34:38.499356014 +0000 UTC m=+0.032553466 container create 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d7cb254b70729b97b17a367f783f632d5e657122c2ee7d0c22aa21ca7b1d34/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d7cb254b70729b97b17a367f783f632d5e657122c2ee7d0c22aa21ca7b1d34/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:38 compute-0 podman[98372]: 2025-11-25 09:34:38.538347528 +0000 UTC m=+0.071544980 container init 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:38 compute-0 podman[98372]: 2025-11-25 09:34:38.542126291 +0000 UTC m=+0.075323744 container start 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:38 compute-0 bash[98372]: 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd
Nov 25 09:34:38 compute-0 podman[98372]: 2025-11-25 09:34:38.484791265 +0000 UTC m=+0.017988727 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 25 09:34:38 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.565Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.565Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.565Z caller=main.go:623 level=info host_details="(Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 x86_64 compute-0 (none))"
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.565Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.565Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.567Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.568Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.570Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.570Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.571Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.572Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.204µs
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.572Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.572Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.572Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=30.647µs wal_replay_duration=388.051µs wbl_replay_duration=140ns total_replay_duration=488.722µs
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.574Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.574Z caller=main.go:1153 level=info msg="TSDB started"
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.574Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Nov 25 09:34:38 compute-0 sudo[97937]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 25 09:34:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.598Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=23.965084ms db_storage=1.053µs remote_storage=1.343µs web_handler=481ns query_engine=791ns scrape=3.549813ms scrape_sd=184.589µs notify=14.507µs notify_sd=9.378µs rules=19.790843ms tracing=4.62µs
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.598Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Nov 25 09:34:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0[98384]: ts=2025-11-25T09:34:38.598Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
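Prometheus is now serving on 192.168.122.100:9095 with TLS disabled, an empty TSDB (WAL replay took well under a millisecond), and the cephadm-rendered /etc/prometheus/prometheus.yml loaded. Its own HTTP endpoints confirm the state reported above:

    # readiness probe exposed by Prometheus itself
    curl -s http://192.168.122.100:9095/-/ready
    # the configuration as Prometheus parsed it
    curl -s http://192.168.122.100:9095/api/v1/status/config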
Nov 25 09:34:38 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 8208bb2a-d8cd-4096-99b6-139dffa3180e (Updating prometheus deployment (+1 -> 1))
Nov 25 09:34:38 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 8208bb2a-d8cd-4096-99b6-139dffa3180e (Updating prometheus deployment (+1 -> 1)) in 5 seconds
Nov 25 09:34:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Nov 25 09:34:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 25 09:34:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:39 compute-0 ceph-mon[74207]: pgmap v30: 12 pgs: 12 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:39 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:39 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:39 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:39 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 25 09:34:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 25 09:34:39 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.zcfgby(active, since 47s), standbys: compute-1.plffrn, compute-2.flybft
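With the prometheus server running, cephadm also enables the mgr's prometheus module, the exporter the new server scrapes. Enabling a module changes the mgrmap, and the active mgr respawns to load it, which is what the restart and module-loading lines that follow show. The resulting state is visible from the CLI:

    # confirm the module is enabled and where its exporter listens
    ceph mgr module ls
    ceph mgr services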
Nov 25 09:34:39 compute-0 sshd-session[92842]: Connection closed by 192.168.122.100 port 58450
Nov 25 09:34:39 compute-0 sshd-session[92812]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 09:34:39 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 25 09:34:39 compute-0 systemd[1]: session-35.scope: Consumed 33.396s CPU time.
Nov 25 09:34:39 compute-0 systemd-logind[744]: Session 35 logged out. Waiting for processes to exit.
Nov 25 09:34:39 compute-0 systemd-logind[744]: Removed session 35.
Nov 25 09:34:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setuser ceph since I am not root
Nov 25 09:34:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ignoring --setgroup ceph since I am not root
Nov 25 09:34:39 compute-0 ceph-mgr[74476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 09:34:39 compute-0 ceph-mgr[74476]: pidfile_write: ignore empty --pid-file
Nov 25 09:34:39 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'alerts'
Nov 25 09:34:39 compute-0 ceph-mgr[74476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:34:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:39.790+0000 7f5f3506c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 09:34:39 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'balancer'
Nov 25 09:34:39 compute-0 ceph-mgr[74476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:34:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:39.862+0000 7f5f3506c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 09:34:39 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'cephadm'
Nov 25 09:34:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:34:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:40.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:34:40 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'crash'
Nov 25 09:34:40 compute-0 ceph-mgr[74476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:34:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:40.525+0000 7f5f3506c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 09:34:40 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'dashboard'
Nov 25 09:34:40 compute-0 ceph-mon[74207]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 25 09:34:40 compute-0 ceph-mon[74207]: mgrmap e25: compute-0.zcfgby(active, since 47s), standbys: compute-1.plffrn, compute-2.flybft
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'devicehealth'
Nov 25 09:34:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:41.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:41.075+0000 7f5f3506c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 09:34:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 09:34:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 09:34:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   from numpy import show_config as show_numpy_config
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:41.215+0000 7f5f3506c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'influx'
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:41.275+0000 7f5f3506c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'insights'
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'iostat'
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:41.393+0000 7f5f3506c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'k8sevents'
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'localpool'
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 09:34:41 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'mirroring'
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'nfs'
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:42.230+0000 7f5f3506c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'orchestrator'
Nov 25 09:34:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:42.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:42.417+0000 7f5f3506c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:42.482+0000 7f5f3506c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'osd_support'
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:42.540+0000 7f5f3506c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:42.607+0000 7f5f3506c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'progress'
Nov 25 09:34:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:42.675+0000 7f5f3506c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'prometheus'
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:42.974+0000 7f5f3506c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 09:34:42 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rbd_support'
Nov 25 09:34:43 compute-0 ceph-mgr[74476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:43.057+0000 7f5f3506c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'restful'
Nov 25 09:34:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rgw'
Nov 25 09:34:43 compute-0 ceph-mgr[74476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:43.425+0000 7f5f3506c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'rook'
Nov 25 09:34:43 compute-0 ceph-mgr[74476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:43.907+0000 7f5f3506c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'selftest'
Nov 25 09:34:43 compute-0 ceph-mgr[74476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:43.969+0000 7f5f3506c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 09:34:43 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'snap_schedule'
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'stats'
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.038+0000 7f5f3506c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'status'
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.166+0000 7f5f3506c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telegraf'
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.227+0000 7f5f3506c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'telemetry'
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.361+0000 7f5f3506c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 09:34:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:44.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.553+0000 7f5f3506c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'volumes'
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.784+0000 7f5f3506c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Loading python module 'zabbix'
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.plffrn restarted
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.plffrn started
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.845+0000 7f5f3506c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zcfgby restarted
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zcfgby
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: ms_deliver_dispatch: unhandled message 0x564c3046f860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr handle_mgr_map Activating!
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr handle_mgr_map I am now activating
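After the respawn the monitor re-activates the same daemon, compute-0.zcfgby, rather than failing over to a standby, bumping the mgrmap to e26 and the osdmap to e42. The "Module X has missing NOTIFY_TYPES member" lines during the reload are per-module load-time warnings; the modules still load. A quick check of the active/standby layout:

    # active mgr and standbys, matching mgrmap e26 above
    ceph mgr stat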
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.zcfgby(active, starting, since 0.0237431s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.pwazzx"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.pwazzx"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e8 all = 0
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.knpqas"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.knpqas"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e8 all = 0
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.wjveyw"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjveyw"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e8 all = 0
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).mds e8 all = 1
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: Standby manager daemon compute-1.plffrn restarted
Nov 25 09:34:44 compute-0 ceph-mon[74207]: Standby manager daemon compute-1.plffrn started
Nov 25 09:34:44 compute-0 ceph-mon[74207]: Active manager daemon compute-0.zcfgby restarted
Nov 25 09:34:44 compute-0 ceph-mon[74207]: Activating manager daemon compute-0.zcfgby
Nov 25 09:34:44 compute-0 ceph-mon[74207]: osdmap e42: 3 total, 3 up, 3 in
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mgrmap e26: compute-0.zcfgby(active, starting, since 0.0237431s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.pwazzx"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.knpqas"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjveyw"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: balancer
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Starting
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Manager daemon compute-0.zcfgby is now available
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:34:44
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.flybft restarted
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.flybft started
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: cephadm
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: crash
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: dashboard
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [dashboard INFO sso] Loading SSO DB version=1
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: devicehealth
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: iostat
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: nfs
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: orchestrator
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Starting
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: pg_autoscaler
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: progress
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: prometheus
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [progress INFO root] Loading...
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f5edcc236d0>, <progress.module.GhostEvent object at 0x7f5edcc23700>, <progress.module.GhostEvent object at 0x7f5ed63da700>, <progress.module.GhostEvent object at 0x7f5ed63da760>, <progress.module.GhostEvent object at 0x7f5ed63da730>, <progress.module.GhostEvent object at 0x7f5ed63da070>, <progress.module.GhostEvent object at 0x7f5ed63da970>, <progress.module.GhostEvent object at 0x7f5ed63da9d0>, <progress.module.GhostEvent object at 0x7f5ed63daa00>, <progress.module.GhostEvent object at 0x7f5ed63da9a0>, <progress.module.GhostEvent object at 0x7f5ed63daa30>] historic events
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [prometheus INFO root] Cache enabled
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [prometheus INFO root] starting metric collection thread
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] recovery thread starting
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] starting setup
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: rbd_support
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: restful
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: status
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: telemetry
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [prometheus INFO root] Starting engine...
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: [25/Nov/2025:09:34:44] ENGINE Bus STARTING
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: CherryPy Checker:
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: The Application mounted at '' has an empty config.
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:34:44] ENGINE Bus STARTING
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [restful WARNING root] server not running: no certificate configured
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: mgr load Constructed class from module: volumes
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.969+0000 7f5ec3307640 -1 client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.971+0000 7f5ebf13f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.971+0000 7f5ebf13f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.971+0000 7f5ebf13f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.971+0000 7f5ebf13f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:44.971+0000 7f5ebf13f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: client.0 error registering admin socket command: (17) File exists
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] PerfHandler: starting
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TaskHandler: starting
Nov 25 09:34:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"} v 0)
Nov 25 09:34:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:34:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] setup complete
Nov 25 09:34:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: [25/Nov/2025:09:34:45] ENGINE Serving on http://:::9283
Nov 25 09:34:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: [25/Nov/2025:09:34:45] ENGINE Bus STARTED
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:34:45] ENGINE Serving on http://:::9283
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:34:45] ENGINE Bus STARTED
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [prometheus INFO root] Engine started.
Nov 25 09:34:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:45 compute-0 sshd-session[98587]: Accepted publickey for ceph-admin from 192.168.122.100 port 50358 ssh2: RSA SHA256:9k4SW9JXeQ+nzxgg2xiWHFR9hVPc7R5P3piA8/i+uwY
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 25 09:34:45 compute-0 systemd-logind[744]: New session 37 of user ceph-admin.
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 25 09:34:45 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 25 09:34:45 compute-0 sshd-session[98587]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Nov 25 09:34:45 compute-0 sudo[98594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:45 compute-0 sudo[98594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:45 compute-0 sudo[98594]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:45 compute-0 sudo[98627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:34:45 compute-0 sudo[98627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: [dashboard INFO dashboard.module] Engine started.
Nov 25 09:34:45 compute-0 podman[98712]: 2025-11-25 09:34:45.770036593 +0000 UTC m=+0.041285770 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:34:45 compute-0 podman[98712]: 2025-11-25 09:34:45.848120761 +0000 UTC m=+0.119369939 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zcfgby", "id": "compute-0.zcfgby"}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-2.flybft", "id": "compute-2.flybft"}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr metadata", "who": "compute-1.plffrn", "id": "compute-1.plffrn"}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: Manager daemon compute-0.zcfgby is now available
Nov 25 09:34:45 compute-0 ceph-mon[74207]: Standby manager daemon compute-2.flybft restarted
Nov 25 09:34:45 compute-0 ceph-mon[74207]: Standby manager daemon compute-2.flybft started
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 09:34:45 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.zcfgby(active, since 1.06329s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v3: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:46 compute-0 podman[98824]: 2025-11-25 09:34:46.225424169 +0000 UTC m=+0.036303989 container exec dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:46 compute-0 podman[98824]: 2025-11-25 09:34:46.234033431 +0000 UTC m=+0.044913250 container exec_died dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:34:46] ENGINE Bus STARTING
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:34:46] ENGINE Bus STARTING
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:34:46] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:34:46] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:34:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:34:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:34:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:46.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:34:46 compute-0 podman[98919]: 2025-11-25 09:34:46.424931004 +0000 UTC m=+0.035054973 container exec 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:46 compute-0 podman[98919]: 2025-11-25 09:34:46.44605995 +0000 UTC m=+0.056183909 container exec_died 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:34:46] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:34:46] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:34:46] ENGINE Bus STARTED
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:34:46] ENGINE Bus STARTED
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: [cephadm INFO cherrypy.error] [25/Nov/2025:09:34:46] ENGINE Client ('192.168.122.100', 39184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : [25/Nov/2025:09:34:46] ENGINE Client ('192.168.122.100', 39184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:34:46 compute-0 podman[98979]: 2025-11-25 09:34:46.585057553 +0000 UTC m=+0.036634660 container exec e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:34:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:34:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-0 podman[98979]: 2025-11-25 09:34:46.718838889 +0000 UTC m=+0.170415998 container exec_died e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:34:46 compute-0 podman[99038]: 2025-11-25 09:34:46.86450741 +0000 UTC m=+0.038356998 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v4: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:46 compute-0 ceph-mon[74207]: mgrmap e27: compute-0.zcfgby(active, since 1.06329s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:46 compute-0 ceph-mon[74207]: [25/Nov/2025:09:34:46] ENGINE Bus STARTING
Nov 25 09:34:46 compute-0 ceph-mon[74207]: [25/Nov/2025:09:34:46] ENGINE Serving on http://192.168.122.100:8765
Nov 25 09:34:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-0 ceph-mon[74207]: [25/Nov/2025:09:34:46] ENGINE Serving on https://192.168.122.100:7150
Nov 25 09:34:46 compute-0 ceph-mon[74207]: [25/Nov/2025:09:34:46] ENGINE Bus STARTED
Nov 25 09:34:46 compute-0 ceph-mon[74207]: [25/Nov/2025:09:34:46] ENGINE Client ('192.168.122.100', 39184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 09:34:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:46 compute-0 podman[99055]: 2025-11-25 09:34:46.925972986 +0000 UTC m=+0.050522645 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:34:46 compute-0 podman[99038]: 2025-11-25 09:34:46.928449073 +0000 UTC m=+0.102298661 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:34:46 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Check health
Nov 25 09:34:47 compute-0 podman[99101]: 2025-11-25 09:34:47.064974496 +0000 UTC m=+0.035415773 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, vcs-type=git, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, description=keepalived for Ceph, name=keepalived)
Nov 25 09:34:47 compute-0 podman[99101]: 2025-11-25 09:34:47.075089075 +0000 UTC m=+0.045530353 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 09:34:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:47.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:34:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:34:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 25 09:34:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:47 compute-0 podman[99151]: 2025-11-25 09:34:47.216553227 +0000 UTC m=+0.036896003 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:47 compute-0 podman[99151]: 2025-11-25 09:34:47.236955353 +0000 UTC m=+0.057298108 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:47 compute-0 sudo[98627]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:47 compute-0 sudo[99182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:47 compute-0 sudo[99182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:47 compute-0 sudo[99182]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:47 compute-0 sudo[99207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:34:47 compute-0 sudo[99207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:34:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:34:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 25 09:34:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:47 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.zcfgby(active, since 2s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:47 compute-0 sudo[99207]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:47 compute-0 sudo[99262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:47 compute-0 sudo[99262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:47 compute-0 sudo[99262]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:47 compute-0 sudo[99287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 09:34:47 compute-0 sudo[99287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99287]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 25 09:34:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:48 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:34:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
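[Annotation] The two mon_commands above are how the mgr assembles the client files it is about to distribute: "config generate-minimal-conf" returns a stripped-down client configuration (essentially just the fsid and mon addresses), and "auth get client.admin" returns the admin keyring; the "Updating ...:/etc/ceph/ceph.conf" lines that follow are those files being pushed to each host. A minimal conf for this cluster would look roughly like the sketch below — the fsid is taken from the /var/lib/ceph paths in this log, but the mon_host value is an illustrative assumption, since the actual monitor address list is not printed in this section:

    [global]
            fsid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
            # illustrative only; real mon addresses are not shown in this log section
            mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]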
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:34:48 compute-0 sudo[99329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:34:48 compute-0 sudo[99329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99329]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:34:48 compute-0 sudo[99354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99354]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:34:48 compute-0 sudo[99379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99379]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 ceph-mon[74207]: pgmap v4: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:48 compute-0 ceph-mon[74207]: mgrmap e28: compute-0.zcfgby(active, since 2s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:34:48 compute-0 sudo[99404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:48 compute-0 sudo[99404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99404]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:34:48 compute-0 sudo[99429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99429]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:34:48 compute-0 sudo[99477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99477]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new
Nov 25 09:34:48 compute-0 sudo[99502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99502]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 09:34:48 compute-0 sudo[99527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99527]: pam_unix(sudo:session): session closed for user root
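[Annotation] The sudo sequence above (mkdir, touch, chown, chmod, mv) is cephadm staging the new file under /tmp/cephadm-<fsid>/ with the right ownership and mode, then renaming it over /etc/ceph/ceph.conf, so readers never observe a half-written config. A minimal Python sketch of the same write-then-rename pattern follows; the path and content in the usage comment are illustrative, not taken from this cluster:

    import os
    import tempfile

    def atomic_write(path: str, data: bytes, mode: int = 0o644) -> None:
        """Write data to a temp file in the target directory, then rename it
        over the destination so readers never see a partial file."""
        directory = os.path.dirname(path) or "."
        fd, tmp = tempfile.mkstemp(dir=directory, suffix=".new")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # ensure contents hit disk before the rename
            os.chmod(tmp, mode)
            os.rename(tmp, path)       # atomic on POSIX within one filesystem
        except BaseException:
            os.unlink(tmp)             # clean up the staged file on any failure
            raise

    # illustrative usage:
    # atomic_write("/etc/ceph/ceph.conf", b"[global]\nfsid = ...\n", 0o644)

One design detail worth noting: os.rename is only atomic within a single filesystem, while cephadm stages under /tmp, which may be a separate mount; that is presumably why the log shows /bin/mv, which falls back to copy-and-unlink when the rename crosses filesystems.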
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:34:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:48.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:48 compute-0 sudo[99552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:34:48 compute-0 sudo[99552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99552]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:34:48 compute-0 sudo[99577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99577]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:34:48 compute-0 sudo[99602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99602]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:48 compute-0 sudo[99627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99627]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:34:48 compute-0 sudo[99652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99652]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:34:48 compute-0 sudo[99700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99700]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new
Nov 25 09:34:48 compute-0 sudo[99725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99725]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:48 compute-0 sudo[99750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:48 compute-0 sudo[99750]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:48 compute-0 sudo[99775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 09:34:48 compute-0 sudo[99775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99775]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v5: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:48 compute-0 sudo[99800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph
Nov 25 09:34:48 compute-0 sudo[99800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99800]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:34:48 compute-0 sudo[99825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99825]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:48 compute-0 sudo[99850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:48 compute-0 sudo[99850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:48 compute-0 sudo[99850]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 sudo[99875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-0 sudo[99875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[99875]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:49 compute-0 sudo[99923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-0 sudo[99923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[99923]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:49 compute-0 sudo[99948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-0 sudo[99948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[99948]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:49 compute-0 sudo[99973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:49 compute-0 sudo[99973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[99973]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:49 compute-0 ceph-mon[74207]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 09:34:49 compute-0 ceph-mon[74207]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 09:34:49 compute-0 ceph-mon[74207]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 09:34:49 compute-0 ceph-mon[74207]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:49 compute-0 ceph-mon[74207]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:49 compute-0 ceph-mon[74207]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.zcfgby(active, since 4s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:49 compute-0 sudo[99998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:34:49 compute-0 sudo[99998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[99998]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 sudo[100023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config
Nov 25 09:34:49 compute-0 sudo[100023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[100023]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 sudo[100048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-0 sudo[100048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[100048]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 sudo[100073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:49 compute-0 sudo[100073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[100073]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 sudo[100098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-0 sudo[100098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[100098]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 sudo[100146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-0 sudo[100146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[100146]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 sudo[100171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new
Nov 25 09:34:49 compute-0 sudo[100171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[100171]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 sudo[100196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring.new /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:49 compute-0 sudo[100196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:49 compute-0 sudo[100196]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 8a4fd458-4324-44da-a0a0-41ced6496f3a (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [progress INFO root] fail: finished ev 8a4fd458-4324-44da-a0a0-41ced6496f3a (Updating ingress.nfs.cephfs deployment (+6 -> 6)): max() arg is an empty sequence
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 8a4fd458-4324-44da-a0a0-41ced6496f3a (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 0 seconds
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm ERROR cephadm.serve] Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress
                                           service_id: nfs.cephfs
                                           service_name: ingress.nfs.cephfs
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           spec:
                                             backend_service: nfs.cephfs
                                             enable_haproxy_protocol: true
                                             first_virtual_router_id: 50
                                             frontend_port: 2049
                                             monitor_port: 9049
                                             virtual_ip: 192.168.122.2/24
                                           ''')): max() arg is an empty sequence
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
                                               if self._apply_service(spec):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service
                                               daemon_spec = svc.prepare_create(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create
                                               return self.haproxy_prepare_create(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create
                                               daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config
                                               num_ranks = 1 + max(by_rank.keys())
                                           ValueError: max() arg is an empty sequence
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [ERR] : Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress
                                           service_id: nfs.cephfs
                                           service_name: ingress.nfs.cephfs
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           spec:
                                             backend_service: nfs.cephfs
                                             enable_haproxy_protocol: true
                                             first_virtual_router_id: 50
                                             frontend_port: 2049
                                             monitor_port: 9049
                                             virtual_ip: 192.168.122.2/24
                                           ''')): max() arg is an empty sequence
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
                                               if self._apply_service(spec):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service
                                               daemon_spec = svc.prepare_create(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create
                                               return self.haproxy_prepare_create(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create
                                               daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config
                                               num_ranks = 1 + max(by_rank.keys())
                                           ValueError: max() arg is an empty sequence
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v6: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 41cd2523-4efd-47ea-b4d2-6989f355938a (Updating nfs.cephfs deployment (+3 -> 3))
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T09:34:49.666+0000 7f5ee3495640 -1 log_channel(cephadm) log [ERR] : Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: service_id: nfs.cephfs
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: service_name: ingress.nfs.cephfs
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: placement:
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   hosts:
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   - compute-0
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   - compute-1
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   - compute-2
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: spec:
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   backend_service: nfs.cephfs
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   enable_haproxy_protocol: true
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   first_virtual_router_id: 50
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   frontend_port: 2049
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   monitor_port: 9049
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   virtual_ip: 192.168.122.2/24
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ''')): max() arg is an empty sequence
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: Traceback (most recent call last):
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:     if self._apply_service(spec):
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:     daemon_spec = svc.prepare_create(daemon_spec)
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:     return self.haproxy_prepare_create(daemon_spec)
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:     daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:   File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]:     num_ranks = 1 + max(by_rank.keys())
Nov 25 09:34:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ValueError: max() arg is an empty sequence
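[Annotation] The ValueError repeated above is the proximate cause of the ingress.nfs.cephfs failure: haproxy_generate_config computes num_ranks = 1 + max(by_rank.keys()), and the mapping of NFS rank to daemon is empty at the moment the haproxy config is generated — apparently a race, since the nfs.cephfs.0/1 daemons are only deployed a few lines further down. A minimal sketch of the failure mode and an obvious guard; by_rank here is a stand-in for the mapping the real code builds, and the guarded form is an illustration, not necessarily the fix cephadm adopted:

    from typing import Dict

    by_rank: Dict[int, str] = {}  # rank -> daemon name; empty during this window

    # what ingress.py line 139 effectively does:
    try:
        num_ranks = 1 + max(by_rank.keys())
    except ValueError:
        # max() on an empty sequence raises ValueError, exactly as in the log
        num_ranks = 0

    # equivalent guarded form using max()'s default argument (Python >= 3.4):
    num_ranks = 1 + max(by_rank.keys(), default=-1)  # 0 when no ranks exist yet
    print(num_ranks)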
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
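[Annotation] Each handle_command/audit pair above is one structured mon_command from the mgr: a JSON document with a "prefix" and arguments, logged once at dispatch and again as finished. The same commands can be issued from Python through the rados binding's mon_command() call; a minimal sketch, assuming the python3-rados binding is installed, a reachable cluster, and the /etc/ceph/ceph.conf just distributed above:

    import json
    import rados  # python3-rados

    # connect as client.admin using the conf cephadm just wrote out
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # same command the mgr sent for the ganesha grace table above
        cmd = json.dumps({
            "prefix": "auth get-or-create",
            "entity": "client.mgr.nfs.grace.nfs.cephfs",
            "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"],
        })
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode(), outs)
    finally:
        cluster.shutdown()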
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.yfzsxe's ganesha conf is defaulting to empty
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.yfzsxe's ganesha conf is defaulting to empty
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.yfzsxe on compute-1
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.yfzsxe on compute-1
Nov 25 09:34:49 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 12 completed events
Nov 25 09:34:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:34:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:50 compute-0 ceph-mon[74207]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:50 compute-0 ceph-mon[74207]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 09:34:50 compute-0 ceph-mon[74207]: pgmap v5: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:50 compute-0 ceph-mon[74207]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:50 compute-0 ceph-mon[74207]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:50 compute-0 ceph-mon[74207]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 09:34:50 compute-0 ceph-mon[74207]: mgrmap e29: compute-0.zcfgby(active, since 4s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress
                                           service_id: nfs.cephfs
                                           service_name: ingress.nfs.cephfs
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           spec:
                                             backend_service: nfs.cephfs
                                             enable_haproxy_protocol: true
                                             first_virtual_router_id: 50
                                             frontend_port: 2049
                                             monitor_port: 9049
                                             virtual_ip: 192.168.122.2/24
                                           ''')): max() arg is an empty sequence
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
                                               if self._apply_service(spec):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service
                                               daemon_spec = svc.prepare_create(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create
                                               return self.haproxy_prepare_create(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create
                                               daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
                                             File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config
                                               num_ranks = 1 + max(by_rank.keys())
                                           ValueError: max() arg is an empty sequence
Nov 25 09:34:50 compute-0 ceph-mon[74207]: pgmap v6: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 09:34:50 compute-0 ceph-mon[74207]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:34:50] "GET /metrics HTTP/1.1" 200 46565 "" "Prometheus/2.51.0"
Nov 25 09:34:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:34:50] "GET /metrics HTTP/1.1" 200 46565 "" "Prometheus/2.51.0"
Nov 25 09:34:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:50.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:50 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Nov 25 09:34:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:34:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:34:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:34:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:50 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.jouchy
Nov 25 09:34:50 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.jouchy
Nov 25 09:34:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 25 09:34:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 09:34:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 09:34:50 compute-0 ceph-mgr[74476]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 25 09:34:50 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 25 09:34:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 25 09:34:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 09:34:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 09:34:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:50 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 25 09:34:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 09:34:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:51 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:51 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:51 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.jouchy-rgw
Nov 25 09:34:51 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.jouchy-rgw
Nov 25 09:34:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 25 09:34:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:34:51 compute-0 ceph-mgr[74476]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.jouchy's ganesha conf is defaulting to empty
Nov 25 09:34:51 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.jouchy's ganesha conf is defaulting to empty
Nov 25 09:34:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:51 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.jouchy on compute-2
Nov 25 09:34:51 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.jouchy on compute-2
Nov 25 09:34:51 compute-0 ceph-mon[74207]: Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:51 compute-0 ceph-mon[74207]: Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw
Nov 25 09:34:51 compute-0 ceph-mon[74207]: Bind address in nfs.cephfs.0.0.compute-1.yfzsxe's ganesha conf is defaulting to empty
Nov 25 09:34:51 compute-0 ceph-mon[74207]: Deploying daemon nfs.cephfs.0.0.compute-1.yfzsxe on compute-1
Nov 25 09:34:51 compute-0 ceph-mon[74207]: Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:34:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v7: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 355 B/s wr, 13 op/s
Nov 25 09:34:52 compute-0 ceph-mon[74207]: Creating key for client.nfs.cephfs.1.0.compute-2.jouchy
Nov 25 09:34:52 compute-0 ceph-mon[74207]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 25 09:34:52 compute-0 ceph-mon[74207]: Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:52 compute-0 ceph-mon[74207]: Creating key for client.nfs.cephfs.1.0.compute-2.jouchy-rgw
Nov 25 09:34:52 compute-0 ceph-mon[74207]: Bind address in nfs.cephfs.1.0.compute-2.jouchy's ganesha conf is defaulting to empty
Nov 25 09:34:52 compute-0 ceph-mon[74207]: Deploying daemon nfs.cephfs.1.0.compute-2.jouchy on compute-2
Nov 25 09:34:52 compute-0 ceph-mon[74207]: pgmap v7: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 355 B/s wr, 13 op/s
Nov 25 09:34:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:34:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:34:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:34:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:52 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.rychik
Nov 25 09:34:52 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.rychik
Nov 25 09:34:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 25 09:34:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 09:34:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 09:34:52 compute-0 ceph-mgr[74476]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 25 09:34:52 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 25 09:34:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 25 09:34:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 09:34:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 09:34:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:52 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:52.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:53 compute-0 ceph-mon[74207]: Creating key for client.nfs.cephfs.2.0.compute-0.rychik
Nov 25 09:34:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 09:34:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 09:34:53 compute-0 ceph-mon[74207]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 25 09:34:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 09:34:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 09:34:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v8: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 264 B/s wr, 9 op/s
Nov 25 09:34:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 25 09:34:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 09:34:54 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:54 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.rychik-rgw
Nov 25 09:34:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.rychik-rgw
Nov 25 09:34:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 25 09:34:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:34:54 compute-0 ceph-mgr[74476]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.rychik's ganesha conf is defaulting to empty
Nov 25 09:34:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.rychik's ganesha conf is defaulting to empty
Nov 25 09:34:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.rychik on compute-0
Nov 25 09:34:54 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.rychik on compute-0
Nov 25 09:34:54 compute-0 sudo[100332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:54 compute-0 sudo[100332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:54 compute-0 sudo[100332]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:54 compute-0 sudo[100357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:34:54 compute-0 sudo[100357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:54 compute-0 ceph-mon[74207]: pgmap v8: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 264 B/s wr, 9 op/s
Nov 25 09:34:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 09:34:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 09:34:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 09:34:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:54.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:54 compute-0 podman[100416]: 2025-11-25 09:34:54.571469053 +0000 UTC m=+0.031971449 container create 8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hypatia, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:54 compute-0 systemd[1]: Started libpod-conmon-8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53.scope.
Nov 25 09:34:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:54 compute-0 podman[100416]: 2025-11-25 09:34:54.640160442 +0000 UTC m=+0.100662847 container init 8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 09:34:54 compute-0 podman[100416]: 2025-11-25 09:34:54.644489413 +0000 UTC m=+0.104991799 container start 8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 09:34:54 compute-0 podman[100416]: 2025-11-25 09:34:54.645801728 +0000 UTC m=+0.106304113 container attach 8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:34:54 compute-0 reverent_hypatia[100429]: 167 167
Nov 25 09:34:54 compute-0 systemd[1]: libpod-8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53.scope: Deactivated successfully.
Nov 25 09:34:54 compute-0 conmon[100429]: conmon 8e6230b4f6dfd91b7500 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53.scope/container/memory.events
Nov 25 09:34:54 compute-0 podman[100416]: 2025-11-25 09:34:54.649063748 +0000 UTC m=+0.109566133 container died 8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:34:54 compute-0 podman[100416]: 2025-11-25 09:34:54.559272087 +0000 UTC m=+0.019774492 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-afccc68f8396fa5fc8a89b0953dccc6bba02da372bb043a59e859ef8095d353a-merged.mount: Deactivated successfully.
Nov 25 09:34:54 compute-0 podman[100416]: 2025-11-25 09:34:54.67442311 +0000 UTC m=+0.134925505 container remove 8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:34:54 compute-0 systemd[1]: libpod-conmon-8e6230b4f6dfd91b7500bbfb113c4ec249e03eb49a880f089e4789a7383dce53.scope: Deactivated successfully.
Nov 25 09:34:54 compute-0 systemd[1]: Reloading.
Nov 25 09:34:54 compute-0 systemd-sysv-generator[100467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:54 compute-0 systemd-rc-local-generator[100464]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:54 compute-0 systemd[1]: Reloading.
Nov 25 09:34:55 compute-0 systemd-sysv-generator[100508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:34:55 compute-0 systemd-rc-local-generator[100505]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:34:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:55.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:55 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:34:55 compute-0 ceph-mon[74207]: Rados config object exists: conf-nfs.cephfs
Nov 25 09:34:55 compute-0 ceph-mon[74207]: Creating key for client.nfs.cephfs.2.0.compute-0.rychik-rgw
Nov 25 09:34:55 compute-0 ceph-mon[74207]: Bind address in nfs.cephfs.2.0.compute-0.rychik's ganesha conf is defaulting to empty
Nov 25 09:34:55 compute-0 ceph-mon[74207]: Deploying daemon nfs.cephfs.2.0.compute-0.rychik on compute-0
Nov 25 09:34:55 compute-0 podman[100559]: 2025-11-25 09:34:55.353429761 +0000 UTC m=+0.030772299 container create f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1df7d68d109352c6fe1c1b64ef69906d11c3f494c2902523e11aaa7e1e1ad3dc/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1df7d68d109352c6fe1c1b64ef69906d11c3f494c2902523e11aaa7e1e1ad3dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1df7d68d109352c6fe1c1b64ef69906d11c3f494c2902523e11aaa7e1e1ad3dc/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1df7d68d109352c6fe1c1b64ef69906d11c3f494c2902523e11aaa7e1e1ad3dc/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:55 compute-0 podman[100559]: 2025-11-25 09:34:55.392328379 +0000 UTC m=+0.069670917 container init f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:34:55 compute-0 podman[100559]: 2025-11-25 09:34:55.39797195 +0000 UTC m=+0.075314477 container start f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:55 compute-0 bash[100559]: f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195
Nov 25 09:34:55 compute-0 podman[100559]: 2025-11-25 09:34:55.340961785 +0000 UTC m=+0.018304333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:55 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:34:55 compute-0 sudo[100357]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:34:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:34:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 41cd2523-4efd-47ea-b4d2-6989f355938a (Updating nfs.cephfs deployment (+3 -> 3))
Nov 25 09:34:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 41cd2523-4efd-47ea-b4d2-6989f355938a (Updating nfs.cephfs deployment (+3 -> 3)) in 6 seconds
Nov 25 09:34:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:34:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:34:55 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:34:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:34:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:34:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:34:55 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:34:55 compute-0 sudo[100598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:55 compute-0 sudo[100598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:55 compute-0 sudo[100598]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=0
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:34:55 compute-0 sudo[100638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:34:55 compute-0 sudo[100638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[reaper] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Nov 25 09:34:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:34:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:34:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v9: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 209 B/s wr, 7 op/s
Nov 25 09:34:55 compute-0 podman[100708]: 2025-11-25 09:34:55.882658455 +0000 UTC m=+0.027657596 container create e5b0f2b0b32502c79b365295d1af2902fda967fdd21823c6b119c79481abfac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:55 compute-0 systemd[1]: Started libpod-conmon-e5b0f2b0b32502c79b365295d1af2902fda967fdd21823c6b119c79481abfac1.scope.
Nov 25 09:34:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:55 compute-0 podman[100708]: 2025-11-25 09:34:55.931552909 +0000 UTC m=+0.076552070 container init e5b0f2b0b32502c79b365295d1af2902fda967fdd21823c6b119c79481abfac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 09:34:55 compute-0 podman[100708]: 2025-11-25 09:34:55.939022953 +0000 UTC m=+0.084022094 container start e5b0f2b0b32502c79b365295d1af2902fda967fdd21823c6b119c79481abfac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:34:55 compute-0 podman[100708]: 2025-11-25 09:34:55.940203298 +0000 UTC m=+0.085202459 container attach e5b0f2b0b32502c79b365295d1af2902fda967fdd21823c6b119c79481abfac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:55 compute-0 amazing_rosalind[100721]: 167 167
Nov 25 09:34:55 compute-0 systemd[1]: libpod-e5b0f2b0b32502c79b365295d1af2902fda967fdd21823c6b119c79481abfac1.scope: Deactivated successfully.
Nov 25 09:34:55 compute-0 podman[100708]: 2025-11-25 09:34:55.943340042 +0000 UTC m=+0.088339184 container died e5b0f2b0b32502c79b365295d1af2902fda967fdd21823c6b119c79481abfac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7142dec49822a5b69cd3a6adaabd73330e7f859e80246b4bbbd2e6ec9ad6aae-merged.mount: Deactivated successfully.
Nov 25 09:34:55 compute-0 podman[100708]: 2025-11-25 09:34:55.959331432 +0000 UTC m=+0.104330573 container remove e5b0f2b0b32502c79b365295d1af2902fda967fdd21823c6b119c79481abfac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:55 compute-0 podman[100708]: 2025-11-25 09:34:55.870680432 +0000 UTC m=+0.015679583 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:55 compute-0 systemd[1]: libpod-conmon-e5b0f2b0b32502c79b365295d1af2902fda967fdd21823c6b119c79481abfac1.scope: Deactivated successfully.
Nov 25 09:34:56 compute-0 podman[100743]: 2025-11-25 09:34:56.070418873 +0000 UTC m=+0.027355996 container create f2df689a6d43cc96dd7dfbf8b6fcc1333f053c50603dc52c9eff5a0fcfe9fd22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:34:56 compute-0 systemd[1]: Started libpod-conmon-f2df689a6d43cc96dd7dfbf8b6fcc1333f053c50603dc52c9eff5a0fcfe9fd22.scope.
Nov 25 09:34:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39dc18a541a95c374f574bc0ac0eef84c29ea6c247f7706aa5375dc10e32841/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39dc18a541a95c374f574bc0ac0eef84c29ea6c247f7706aa5375dc10e32841/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39dc18a541a95c374f574bc0ac0eef84c29ea6c247f7706aa5375dc10e32841/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39dc18a541a95c374f574bc0ac0eef84c29ea6c247f7706aa5375dc10e32841/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39dc18a541a95c374f574bc0ac0eef84c29ea6c247f7706aa5375dc10e32841/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:56 compute-0 podman[100743]: 2025-11-25 09:34:56.129245744 +0000 UTC m=+0.086182867 container init f2df689a6d43cc96dd7dfbf8b6fcc1333f053c50603dc52c9eff5a0fcfe9fd22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:34:56 compute-0 podman[100743]: 2025-11-25 09:34:56.136717782 +0000 UTC m=+0.093654905 container start f2df689a6d43cc96dd7dfbf8b6fcc1333f053c50603dc52c9eff5a0fcfe9fd22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:56 compute-0 podman[100743]: 2025-11-25 09:34:56.143828288 +0000 UTC m=+0.100765410 container attach f2df689a6d43cc96dd7dfbf8b6fcc1333f053c50603dc52c9eff5a0fcfe9fd22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:34:56 compute-0 podman[100743]: 2025-11-25 09:34:56.059460894 +0000 UTC m=+0.016398017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:56 compute-0 heuristic_lalande[100756]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:34:56 compute-0 heuristic_lalande[100756]: --> All data devices are unavailable
Nov 25 09:34:56 compute-0 systemd[1]: libpod-f2df689a6d43cc96dd7dfbf8b6fcc1333f053c50603dc52c9eff5a0fcfe9fd22.scope: Deactivated successfully.
Nov 25 09:34:56 compute-0 podman[100743]: 2025-11-25 09:34:56.380264819 +0000 UTC m=+0.337201952 container died f2df689a6d43cc96dd7dfbf8b6fcc1333f053c50603dc52c9eff5a0fcfe9fd22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lalande, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f39dc18a541a95c374f574bc0ac0eef84c29ea6c247f7706aa5375dc10e32841-merged.mount: Deactivated successfully.
Nov 25 09:34:56 compute-0 podman[100743]: 2025-11-25 09:34:56.401095842 +0000 UTC m=+0.358032975 container remove f2df689a6d43cc96dd7dfbf8b6fcc1333f053c50603dc52c9eff5a0fcfe9fd22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 09:34:56 compute-0 systemd[1]: libpod-conmon-f2df689a6d43cc96dd7dfbf8b6fcc1333f053c50603dc52c9eff5a0fcfe9fd22.scope: Deactivated successfully.
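Note: the create/init/start/attach/died/remove sequence above is cephadm's one-shot container pattern: a throwaway ceph container runs a single probe (here ceph-volume, which reported its one LVM data device as unavailable for new OSDs) and is removed as soon as it exits. A minimal Python sketch of the same pattern, using only the podman flags and image digest that appear in this log; the helper name is illustrative, not cephadm's real API:

    import subprocess

    # Image digest copied from the log lines above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def run_in_ceph_container(entrypoint, *args):
        # --rm makes the container one-shot: podman removes it on exit,
        # which is exactly the "container died" -> "container remove"
        # pair logged above.
        cmd = ["podman", "run", "--rm", "--net=host", "--ipc=host",
               "--entrypoint", entrypoint, IMAGE, *args]
        res = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return res.stdout

    # e.g. the probe whose JSON output appears further down in this log:
    # run_in_ceph_container("ceph-volume", "lvm", "list", "--format", "json")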
Nov 25 09:34:56 compute-0 sudo[100638]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:56.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
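Note: the radosgw "beast" lines are the frontend's access log, with a fixed field order (handler pointer, client IP, user, bracketed timestamp, quoted request line, status, byte count, latency); the anonymous HEAD / probes recurring here look like load-balancer health checks. A small parsing sketch, assuming that layout holds for every such line; the regex and function name are illustrative:

    import re

    BEAST_RE = re.compile(
        r'beast: (?P<ptr>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'- - - latency=(?P<latency>[\d.]+)s')

    def parse_beast(line):
        # Returns the access-log fields as a dict, or None for non-beast lines.
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    # parse_beast(<the line above>) -> {'ip': '192.168.122.102',
    # 'user': 'anonymous', 'req': 'HEAD / HTTP/1.0', 'status': '200', ...}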
Nov 25 09:34:56 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:56 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:56 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:56 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:56 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:34:56 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:34:56 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:34:56 compute-0 ceph-mon[74207]: pgmap v9: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 209 B/s wr, 7 op/s
Nov 25 09:34:56 compute-0 sudo[100781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:56 compute-0 sudo[100781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:56 compute-0 sudo[100781]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:56 compute-0 sudo[100806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:34:56 compute-0 sudo[100806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:56 compute-0 podman[100862]: 2025-11-25 09:34:56.807384468 +0000 UTC m=+0.027141643 container create 151bd922aeda861030db13645f7fee38cfd9a4185026930a7e7ca88b4452ce1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bouman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:56 compute-0 systemd[1]: Started libpod-conmon-151bd922aeda861030db13645f7fee38cfd9a4185026930a7e7ca88b4452ce1f.scope.
Nov 25 09:34:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:56 compute-0 podman[100862]: 2025-11-25 09:34:56.864245052 +0000 UTC m=+0.084002227 container init 151bd922aeda861030db13645f7fee38cfd9a4185026930a7e7ca88b4452ce1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 09:34:56 compute-0 podman[100862]: 2025-11-25 09:34:56.868571328 +0000 UTC m=+0.088328503 container start 151bd922aeda861030db13645f7fee38cfd9a4185026930a7e7ca88b4452ce1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bouman, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 09:34:56 compute-0 podman[100862]: 2025-11-25 09:34:56.869694347 +0000 UTC m=+0.089451521 container attach 151bd922aeda861030db13645f7fee38cfd9a4185026930a7e7ca88b4452ce1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:56 compute-0 compassionate_bouman[100875]: 167 167
Nov 25 09:34:56 compute-0 systemd[1]: libpod-151bd922aeda861030db13645f7fee38cfd9a4185026930a7e7ca88b4452ce1f.scope: Deactivated successfully.
Nov 25 09:34:56 compute-0 podman[100862]: 2025-11-25 09:34:56.872748254 +0000 UTC m=+0.092505428 container died 151bd922aeda861030db13645f7fee38cfd9a4185026930a7e7ca88b4452ce1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d621742039f55267d6c635395941c9b0780f11b7faf25c5b8c9af3dee84a6f80-merged.mount: Deactivated successfully.
Nov 25 09:34:56 compute-0 podman[100862]: 2025-11-25 09:34:56.891227063 +0000 UTC m=+0.110984237 container remove 151bd922aeda861030db13645f7fee38cfd9a4185026930a7e7ca88b4452ce1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bouman, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 25 09:34:56 compute-0 podman[100862]: 2025-11-25 09:34:56.796590638 +0000 UTC m=+0.016347812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:56 compute-0 systemd[1]: libpod-conmon-151bd922aeda861030db13645f7fee38cfd9a4185026930a7e7ca88b4452ce1f.scope: Deactivated successfully.
Nov 25 09:34:56 compute-0 sudo[100929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhsujtkbtttmglrlsrrigkhvjolkhjuk ; /usr/bin/python3'
Nov 25 09:34:57 compute-0 sudo[100929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:57 compute-0 podman[100903]: 2025-11-25 09:34:57.008144818 +0000 UTC m=+0.030036803 container create 23ce854244d73492e39f3b02b4833d369d231e7f84238377ddf3e791a8e0319f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:57 compute-0 systemd[1]: Started libpod-conmon-23ce854244d73492e39f3b02b4833d369d231e7f84238377ddf3e791a8e0319f.scope.
Nov 25 09:34:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0e6185167fd11eb5d4b5d5fe307388c46ed89c2d560e88ead4e307af79dcb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0e6185167fd11eb5d4b5d5fe307388c46ed89c2d560e88ead4e307af79dcb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0e6185167fd11eb5d4b5d5fe307388c46ed89c2d560e88ead4e307af79dcb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0e6185167fd11eb5d4b5d5fe307388c46ed89c2d560e88ead4e307af79dcb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:57 compute-0 podman[100903]: 2025-11-25 09:34:57.068581112 +0000 UTC m=+0.090473117 container init 23ce854244d73492e39f3b02b4833d369d231e7f84238377ddf3e791a8e0319f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:57 compute-0 podman[100903]: 2025-11-25 09:34:57.073654037 +0000 UTC m=+0.095546032 container start 23ce854244d73492e39f3b02b4833d369d231e7f84238377ddf3e791a8e0319f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:57 compute-0 podman[100903]: 2025-11-25 09:34:57.075766308 +0000 UTC m=+0.097658313 container attach 23ce854244d73492e39f3b02b4833d369d231e7f84238377ddf3e791a8e0319f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_gates, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:34:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:57 compute-0 podman[100903]: 2025-11-25 09:34:56.994340701 +0000 UTC m=+0.016232706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:57 compute-0 python3[100932]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:57 compute-0 podman[100940]: 2025-11-25 09:34:57.158017774 +0000 UTC m=+0.027481024 container create c2aabde17f854a0621310b579216a4434c01ae507a0c2f71dc34dd3aef31dfb2 (image=quay.io/ceph/ceph:v19, name=hardcore_jemison, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:57 compute-0 systemd[1]: Started libpod-conmon-c2aabde17f854a0621310b579216a4434c01ae507a0c2f71dc34dd3aef31dfb2.scope.
Nov 25 09:34:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc390cfa7523f62e355998c27efc75042ff923475cd23735f1aa3f7f02d46be2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc390cfa7523f62e355998c27efc75042ff923475cd23735f1aa3f7f02d46be2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:57 compute-0 podman[100940]: 2025-11-25 09:34:57.207237251 +0000 UTC m=+0.076700501 container init c2aabde17f854a0621310b579216a4434c01ae507a0c2f71dc34dd3aef31dfb2 (image=quay.io/ceph/ceph:v19, name=hardcore_jemison, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:57 compute-0 podman[100940]: 2025-11-25 09:34:57.210931725 +0000 UTC m=+0.080394966 container start c2aabde17f854a0621310b579216a4434c01ae507a0c2f71dc34dd3aef31dfb2 (image=quay.io/ceph/ceph:v19, name=hardcore_jemison, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:34:57 compute-0 podman[100940]: 2025-11-25 09:34:57.211948413 +0000 UTC m=+0.081411653 container attach c2aabde17f854a0621310b579216a4434c01ae507a0c2f71dc34dd3aef31dfb2 (image=quay.io/ceph/ceph:v19, name=hardcore_jemison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:34:57 compute-0 podman[100940]: 2025-11-25 09:34:57.146862531 +0000 UTC m=+0.016325781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:57 compute-0 hardcore_jemison[100952]: could not fetch user info: no user info saved
Nov 25 09:34:57 compute-0 recursing_gates[100935]: {
Nov 25 09:34:57 compute-0 recursing_gates[100935]:     "1": [
Nov 25 09:34:57 compute-0 recursing_gates[100935]:         {
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "devices": [
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "/dev/loop3"
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             ],
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "lv_name": "ceph_lv0",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "lv_size": "21470642176",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "name": "ceph_lv0",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "tags": {
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.cluster_name": "ceph",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.crush_device_class": "",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.encrypted": "0",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.osd_id": "1",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.type": "block",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.vdo": "0",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:                 "ceph.with_tpm": "0"
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             },
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "type": "block",
Nov 25 09:34:57 compute-0 recursing_gates[100935]:             "vg_name": "ceph_vg0"
Nov 25 09:34:57 compute-0 recursing_gates[100935]:         }
Nov 25 09:34:57 compute-0 recursing_gates[100935]:     ]
Nov 25 09:34:57 compute-0 recursing_gates[100935]: }
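Note: the JSON block above is the payload of the `ceph-volume lvm list --format json` call that cephadm issued at 09:34:56: a map of OSD id to LV records whose ceph.* tags identify the cluster fsid, the OSD fsid, and the backing device. A sketch of pulling out the useful fields, assuming exactly the structure printed above; the function name is illustrative:

    import json

    def osd_block_devices(payload):
        # payload: stdout of `ceph-volume lvm list --format json`,
        # i.e. {"<osd_id>": [<lv record>, ...], ...} as logged above.
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                yield {
                    "osd_id": osd_id,
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "block": tags.get("ceph.block_device"),
                    "encrypted": tags.get("ceph.encrypted") == "1",
                    "backing": lv.get("devices", []),
                }

    # For the record above this yields osd id "1", osd fsid
    # 26fb5eac-2c31-4a21-bbae-433f98108699, block /dev/ceph_vg0/ceph_lv0,
    # backed by /dev/loop3.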
Nov 25 09:34:57 compute-0 systemd[1]: libpod-23ce854244d73492e39f3b02b4833d369d231e7f84238377ddf3e791a8e0319f.scope: Deactivated successfully.
Nov 25 09:34:57 compute-0 podman[100903]: 2025-11-25 09:34:57.318125577 +0000 UTC m=+0.340017562 container died 23ce854244d73492e39f3b02b4833d369d231e7f84238377ddf3e791a8e0319f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:34:57 compute-0 systemd[1]: libpod-c2aabde17f854a0621310b579216a4434c01ae507a0c2f71dc34dd3aef31dfb2.scope: Deactivated successfully.
Nov 25 09:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b0e6185167fd11eb5d4b5d5fe307388c46ed89c2d560e88ead4e307af79dcb5-merged.mount: Deactivated successfully.
Nov 25 09:34:57 compute-0 podman[100903]: 2025-11-25 09:34:57.341248512 +0000 UTC m=+0.363140496 container remove 23ce854244d73492e39f3b02b4833d369d231e7f84238377ddf3e791a8e0319f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_gates, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:34:57 compute-0 systemd[1]: libpod-conmon-23ce854244d73492e39f3b02b4833d369d231e7f84238377ddf3e791a8e0319f.scope: Deactivated successfully.
Nov 25 09:34:57 compute-0 podman[101046]: 2025-11-25 09:34:57.365934363 +0000 UTC m=+0.026236615 container died c2aabde17f854a0621310b579216a4434c01ae507a0c2f71dc34dd3aef31dfb2 (image=quay.io/ceph/ceph:v19, name=hardcore_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:34:57 compute-0 sudo[100806]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc390cfa7523f62e355998c27efc75042ff923475cd23735f1aa3f7f02d46be2-merged.mount: Deactivated successfully.
Nov 25 09:34:57 compute-0 podman[101046]: 2025-11-25 09:34:57.389341525 +0000 UTC m=+0.049643767 container remove c2aabde17f854a0621310b579216a4434c01ae507a0c2f71dc34dd3aef31dfb2 (image=quay.io/ceph/ceph:v19, name=hardcore_jemison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:34:57 compute-0 systemd[1]: libpod-conmon-c2aabde17f854a0621310b579216a4434c01ae507a0c2f71dc34dd3aef31dfb2.scope: Deactivated successfully.
Nov 25 09:34:57 compute-0 sudo[100929]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:57 compute-0 sudo[101065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:57 compute-0 sudo[101065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:57 compute-0 sudo[101065]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:57 compute-0 sudo[101091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:34:57 compute-0 sudo[101091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:57 compute-0 sudo[101139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoeredvgibkudxwfqzjyjkwxvohkzrat ; /usr/bin/python3'
Nov 25 09:34:57 compute-0 sudo[101139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:57 compute-0 python3[101141]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:34:57 compute-0 podman[101149]: 2025-11-25 09:34:57.668025718 +0000 UTC m=+0.039032521 container create ecc985fe6be93bb1ff7c337bc022ae9e6379b50f40d9552cb41ebbeb0fe7b2fa (image=quay.io/ceph/ceph:v19, name=modest_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:34:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v10: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.9 KiB/s wr, 17 op/s
Nov 25 09:34:57 compute-0 systemd[1]: Started libpod-conmon-ecc985fe6be93bb1ff7c337bc022ae9e6379b50f40d9552cb41ebbeb0fe7b2fa.scope.
Nov 25 09:34:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d369856412f85e5db7f9ad75db998497e2b0fa79657c6634fa815e8d4f188c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d369856412f85e5db7f9ad75db998497e2b0fa79657c6634fa815e8d4f188c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:57 compute-0 podman[101149]: 2025-11-25 09:34:57.718512275 +0000 UTC m=+0.089519097 container init ecc985fe6be93bb1ff7c337bc022ae9e6379b50f40d9552cb41ebbeb0fe7b2fa (image=quay.io/ceph/ceph:v19, name=modest_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:34:57 compute-0 podman[101149]: 2025-11-25 09:34:57.723311774 +0000 UTC m=+0.094318576 container start ecc985fe6be93bb1ff7c337bc022ae9e6379b50f40d9552cb41ebbeb0fe7b2fa (image=quay.io/ceph/ceph:v19, name=modest_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:34:57 compute-0 podman[101149]: 2025-11-25 09:34:57.724270501 +0000 UTC m=+0.095277304 container attach ecc985fe6be93bb1ff7c337bc022ae9e6379b50f40d9552cb41ebbeb0fe7b2fa (image=quay.io/ceph/ceph:v19, name=modest_fermi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:34:57 compute-0 ceph-mon[74207]: pgmap v10: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.9 KiB/s wr, 17 op/s
Nov 25 09:34:57 compute-0 podman[101149]: 2025-11-25 09:34:57.645822117 +0000 UTC m=+0.016828939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 25 09:34:57 compute-0 podman[101189]: 2025-11-25 09:34:57.781095047 +0000 UTC m=+0.033139762 container create be1ef21e9d3b517c655ee8861288239866b89b1abe39c22b715603bbc8ddbef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:34:57 compute-0 systemd[1]: Started libpod-conmon-be1ef21e9d3b517c655ee8861288239866b89b1abe39c22b715603bbc8ddbef4.scope.
Nov 25 09:34:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:57 compute-0 podman[101189]: 2025-11-25 09:34:57.83462427 +0000 UTC m=+0.086668984 container init be1ef21e9d3b517c655ee8861288239866b89b1abe39c22b715603bbc8ddbef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:34:57 compute-0 modest_fermi[101182]: {
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "user_id": "openstack",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "display_name": "openstack",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "email": "",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "suspended": 0,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "max_buckets": 1000,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "subusers": [],
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "keys": [
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         {
Nov 25 09:34:57 compute-0 modest_fermi[101182]:             "user": "openstack",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:             "access_key": "1F21JUEJXFDVJW81U75Z",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:             "secret_key": "bz0iUlnwTb2dvlmYnx8syJYdi7NtEPxGA64vi6zO",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:             "active": true,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:             "create_date": "2025-11-25T09:34:57.830539Z"
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         }
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     ],
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "swift_keys": [],
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "caps": [],
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "op_mask": "read, write, delete",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "default_placement": "",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "default_storage_class": "",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "placement_tags": [],
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "bucket_quota": {
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "enabled": false,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "check_on_raw": false,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "max_size": -1,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "max_size_kb": 0,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "max_objects": -1
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     },
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "user_quota": {
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "enabled": false,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "check_on_raw": false,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "max_size": -1,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "max_size_kb": 0,
Nov 25 09:34:57 compute-0 modest_fermi[101182]:         "max_objects": -1
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     },
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "temp_url_keys": [],
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "type": "rgw",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "mfa_ids": [],
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "account_id": "",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "path": "/",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "create_date": "2025-11-25T09:34:57.830363Z",
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "tags": [],
Nov 25 09:34:57 compute-0 modest_fermi[101182]:     "group_ids": []
Nov 25 09:34:57 compute-0 modest_fermi[101182]: }
Nov 25 09:34:57 compute-0 modest_fermi[101182]: 
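Note: taken together, the hardcore_jemison and modest_fermi containers show the playbook's idempotent user setup: `radosgw-admin user info --uid openstack` fails on a fresh cluster ("could not fetch user info: no user info saved"), so the next task runs `user create`, which prints the user document above, generated S3 keypair included. A sketch of the same check-then-create flow, reusing the podman invocation from the ansible command lines; ensure_rgw_user is an illustrative name:

    import json
    import subprocess

    # Base invocation copied from the ansible-ansible.legacy.command lines.
    BASE = ["podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "radosgw-admin", "quay.io/ceph/ceph:v19",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring"]

    def ensure_rgw_user(uid, display_name):
        # First try the read: it succeeds only if the user already exists.
        info = subprocess.run(BASE + ["user", "info", "--uid", uid],
                              capture_output=True, text=True)
        if info.returncode == 0:
            return json.loads(info.stdout)
        # Otherwise create it; radosgw-admin prints the new user document,
        # including the generated access_key/secret_key pair, on stdout.
        create = subprocess.run(BASE + ["user", "create", "--uid", uid,
                                        "--display-name", display_name],
                                check=True, capture_output=True, text=True)
        return json.loads(create.stdout)

    # ensure_rgw_user("openstack", "openstack")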
Nov 25 09:34:57 compute-0 podman[101189]: 2025-11-25 09:34:57.838909559 +0000 UTC m=+0.090954274 container start be1ef21e9d3b517c655ee8861288239866b89b1abe39c22b715603bbc8ddbef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:34:57 compute-0 podman[101189]: 2025-11-25 09:34:57.83990719 +0000 UTC m=+0.091951904 container attach be1ef21e9d3b517c655ee8861288239866b89b1abe39c22b715603bbc8ddbef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:57 compute-0 crazy_chatelet[101277]: 167 167
Nov 25 09:34:57 compute-0 systemd[1]: libpod-be1ef21e9d3b517c655ee8861288239866b89b1abe39c22b715603bbc8ddbef4.scope: Deactivated successfully.
Nov 25 09:34:57 compute-0 podman[101189]: 2025-11-25 09:34:57.841449227 +0000 UTC m=+0.093493952 container died be1ef21e9d3b517c655ee8861288239866b89b1abe39c22b715603bbc8ddbef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:34:57 compute-0 podman[101189]: 2025-11-25 09:34:57.765820109 +0000 UTC m=+0.017864843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:57 compute-0 podman[101189]: 2025-11-25 09:34:57.863249278 +0000 UTC m=+0.115293993 container remove be1ef21e9d3b517c655ee8861288239866b89b1abe39c22b715603bbc8ddbef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_chatelet, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:34:57 compute-0 systemd[1]: libpod-conmon-be1ef21e9d3b517c655ee8861288239866b89b1abe39c22b715603bbc8ddbef4.scope: Deactivated successfully.
Nov 25 09:34:57 compute-0 systemd[1]: libpod-ecc985fe6be93bb1ff7c337bc022ae9e6379b50f40d9552cb41ebbeb0fe7b2fa.scope: Deactivated successfully.
Nov 25 09:34:57 compute-0 podman[101149]: 2025-11-25 09:34:57.874747678 +0000 UTC m=+0.245754479 container died ecc985fe6be93bb1ff7c337bc022ae9e6379b50f40d9552cb41ebbeb0fe7b2fa (image=quay.io/ceph/ceph:v19, name=modest_fermi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 25 09:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-48d369856412f85e5db7f9ad75db998497e2b0fa79657c6634fa815e8d4f188c-merged.mount: Deactivated successfully.
Nov 25 09:34:57 compute-0 podman[101149]: 2025-11-25 09:34:57.899527747 +0000 UTC m=+0.270534549 container remove ecc985fe6be93bb1ff7c337bc022ae9e6379b50f40d9552cb41ebbeb0fe7b2fa (image=quay.io/ceph/ceph:v19, name=modest_fermi, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:34:57 compute-0 systemd[1]: libpod-conmon-ecc985fe6be93bb1ff7c337bc022ae9e6379b50f40d9552cb41ebbeb0fe7b2fa.scope: Deactivated successfully.
Nov 25 09:34:57 compute-0 sudo[101139]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-0 podman[101317]: 2025-11-25 09:34:58.001433947 +0000 UTC m=+0.030081886 container create e25c910c8a7ca42aac76eb2ee340d69f804c9eb6c73b9551a69d48909d6779ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bell, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:58 compute-0 systemd[1]: Started libpod-conmon-e25c910c8a7ca42aac76eb2ee340d69f804c9eb6c73b9551a69d48909d6779ec.scope.
Nov 25 09:34:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fef0b389e3516a73f69a48ac67a9c721fdb4fb723c3aa22ddc09ce159061fcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fef0b389e3516a73f69a48ac67a9c721fdb4fb723c3aa22ddc09ce159061fcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fef0b389e3516a73f69a48ac67a9c721fdb4fb723c3aa22ddc09ce159061fcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fef0b389e3516a73f69a48ac67a9c721fdb4fb723c3aa22ddc09ce159061fcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:34:58 compute-0 podman[101317]: 2025-11-25 09:34:58.063385239 +0000 UTC m=+0.092033177 container init e25c910c8a7ca42aac76eb2ee340d69f804c9eb6c73b9551a69d48909d6779ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bell, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:34:58 compute-0 podman[101317]: 2025-11-25 09:34:58.069853204 +0000 UTC m=+0.098501131 container start e25c910c8a7ca42aac76eb2ee340d69f804c9eb6c73b9551a69d48909d6779ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:58 compute-0 podman[101317]: 2025-11-25 09:34:58.07137373 +0000 UTC m=+0.100021669 container attach e25c910c8a7ca42aac76eb2ee340d69f804c9eb6c73b9551a69d48909d6779ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:34:58 compute-0 podman[101317]: 2025-11-25 09:34:57.989400481 +0000 UTC m=+0.018048439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:34:58 compute-0 python3[101359]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:34:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:58.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:34:58 compute-0 ceph-mgr[74476]: [dashboard INFO request] [192.168.122.100:54700] [GET] [200] [0.102s] [6.3K] [edea2302-bd1f-443a-ac3a-8ed48e7cec4c] /
Nov 25 09:34:58 compute-0 gifted_bell[101331]: {}
Nov 25 09:34:58 compute-0 lvm[101435]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:34:58 compute-0 lvm[101435]: VG ceph_vg0 finished
Nov 25 09:34:58 compute-0 systemd[1]: libpod-e25c910c8a7ca42aac76eb2ee340d69f804c9eb6c73b9551a69d48909d6779ec.scope: Deactivated successfully.
Nov 25 09:34:58 compute-0 podman[101317]: 2025-11-25 09:34:58.603130641 +0000 UTC m=+0.631778589 container died e25c910c8a7ca42aac76eb2ee340d69f804c9eb6c73b9551a69d48909d6779ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bell, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fef0b389e3516a73f69a48ac67a9c721fdb4fb723c3aa22ddc09ce159061fcb-merged.mount: Deactivated successfully.
Nov 25 09:34:58 compute-0 podman[101317]: 2025-11-25 09:34:58.63222772 +0000 UTC m=+0.660875658 container remove e25c910c8a7ca42aac76eb2ee340d69f804c9eb6c73b9551a69d48909d6779ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:34:58 compute-0 systemd[1]: libpod-conmon-e25c910c8a7ca42aac76eb2ee340d69f804c9eb6c73b9551a69d48909d6779ec.scope: Deactivated successfully.
Nov 25 09:34:58 compute-0 sudo[101091]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:34:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:34:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 25 09:34:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:58 compute-0 sudo[101467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:34:58 compute-0 sudo[101468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:34:58 compute-0 sudo[101468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:58 compute-0 sudo[101467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:58 compute-0 sudo[101468]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-0 sudo[101467]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-0 python3[101456]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:34:58 compute-0 ceph-mgr[74476]: [dashboard INFO request] [192.168.122.100:54710] [GET] [200] [0.002s] [6.3K] [32c09b24-332f-41bc-b465-17cd6e50e254] /
Nov 25 09:34:58 compute-0 sudo[101517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:34:58 compute-0 sudo[101517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:34:58 compute-0 sudo[101517]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:58 compute-0 sudo[101542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:34:58 compute-0 sudo[101542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
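The sudo line above runs the fsid-scoped copy of cephadm with `ls`, which prints the daemons cephadm manages on this host as JSON. A sketch of reading that inventory, assuming the same binary path as the log and that each entry carries a "name" key (true of current cephadm output, though this log does not show it):

    import json
    import subprocess

    # Path taken verbatim from the sudo COMMAND above; must run as root,
    # which is what the sudo rule provides.
    CEPHADM = ("/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    out = subprocess.run(
        ["python3", CEPHADM, "ls"],
        check=True, capture_output=True, text=True,
    ).stdout
    daemons = json.loads(out)
    print([d["name"] for d in daemons])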
Nov 25 09:34:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:34:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:34:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
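The beast lines record anonymous "HEAD /" probes answered with 200 — load-balancer health checks against radosgw from the haproxy ingress daemons. A sketch of the same probe; note the listener's host and port are assumptions, since these lines only show the probing clients (192.168.122.100 and .102):

    import http.client

    # Host and port are assumptions: beast logs the client address, not the
    # listening socket. RGW commonly serves on 8080 behind this ingress.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the logged probes return http_status=200
    conn.close()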
Nov 25 09:34:59 compute-0 podman[101624]: 2025-11-25 09:34:59.31304263 +0000 UTC m=+0.040486583 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:34:59 compute-0 podman[101624]: 2025-11-25 09:34:59.394009272 +0000 UTC m=+0.121453225 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:34:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v11: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.9 KiB/s wr, 17 op/s
Nov 25 09:34:59 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:59 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:59 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:59 compute-0 podman[101736]: 2025-11-25 09:34:59.760017094 +0000 UTC m=+0.044408057 container exec dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:59 compute-0 podman[101758]: 2025-11-25 09:34:59.821038973 +0000 UTC m=+0.047962870 container exec_died dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:59 compute-0 podman[101736]: 2025-11-25 09:34:59.82750284 +0000 UTC m=+0.111893783 container exec_died dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:34:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:34:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:34:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 25 09:34:59 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 13 completed events
Nov 25 09:34:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:34:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:34:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:34:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:34:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
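The `osd blocklist ls` dispatch above is the mgr polling the OSD blocklist; both the subcommand and the JSON format flag are visible in the dispatched command itself. The same query from a shell holding an admin keyring, as a sketch:

    import json
    import subprocess

    # Mirrors the mon_command above: {"prefix": "osd blocklist ls",
    # "format": "json"}. Requires client.admin (or equivalent) credentials.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out))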
Nov 25 09:35:00 compute-0 podman[101806]: 2025-11-25 09:35:00.057369083 +0000 UTC m=+0.041509913 container exec 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:00 compute-0 podman[101806]: 2025-11-25 09:35:00.080094109 +0000 UTC m=+0.064234918 container exec_died 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:00] "GET /metrics HTTP/1.1" 200 48297 "" "Prometheus/2.51.0"
Nov 25 09:35:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:00] "GET /metrics HTTP/1.1" 200 48297 "" "Prometheus/2.51.0"
Nov 25 09:35:00 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:35:00 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:35:00 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-0 podman[101864]: 2025-11-25 09:35:00.275260991 +0000 UTC m=+0.046034152 container exec e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:00 compute-0 podman[101864]: 2025-11-25 09:35:00.39203833 +0000 UTC m=+0.162811471 container exec_died e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:00.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:00 compute-0 podman[101922]: 2025-11-25 09:35:00.542005735 +0000 UTC m=+0.037123372 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:35:00 compute-0 podman[101922]: 2025-11-25 09:35:00.551060847 +0000 UTC m=+0.046178484 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:35:00 compute-0 ceph-mon[74207]: pgmap v11: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.9 KiB/s wr, 17 op/s
Nov 25 09:35:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:00 compute-0 podman[101972]: 2025-11-25 09:35:00.697139921 +0000 UTC m=+0.037665886 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, description=keepalived for Ceph, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20)
Nov 25 09:35:00 compute-0 podman[101989]: 2025-11-25 09:35:00.758050039 +0000 UTC m=+0.045267968 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, name=keepalived, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, version=2.2.4, release=1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 25 09:35:00 compute-0 podman[101972]: 2025-11-25 09:35:00.761387812 +0000 UTC m=+0.101913737 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vendor=Red Hat, Inc.)
Nov 25 09:35:00 compute-0 podman[102023]: 2025-11-25 09:35:00.910664223 +0000 UTC m=+0.037155071 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:00 compute-0 podman[102023]: 2025-11-25 09:35:00.933085717 +0000 UTC m=+0.059576555 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:01 compute-0 podman[102069]: 2025-11-25 09:35:01.045726266 +0000 UTC m=+0.033967161 container exec f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 09:35:01 compute-0 podman[102069]: 2025-11-25 09:35:01.056034882 +0000 UTC m=+0.044275746 container exec_died f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:35:01 compute-0 sudo[101542]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:01.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:35:01 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:35:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
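Taken together, the two commands dispatched above — `config generate-minimal-conf` and `auth get client.admin` — produce the minimal ceph.conf and the admin keyring that cephadm distributes to managed hosts. A sketch of fetching both from the CLI, assuming the caller already has admin credentials:

    import subprocess

    def mon(*args: str) -> str:
        """Run a ceph CLI command and return its stdout."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # The same two mon commands the mgr dispatches in the audit lines above.
    minimal_conf = mon("config", "generate-minimal-conf")
    admin_keyring = mon("auth", "get", "client.admin")
    print(minimal_conf)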
Nov 25 09:35:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:35:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v12: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.9 KiB/s wr, 18 op/s
Nov 25 09:35:01 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 25cae543-a326-42c5-b79e-55ae39fb20c6 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Nov 25 09:35:01 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.xlgqkq on compute-1
Nov 25 09:35:01 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.xlgqkq on compute-1
Nov 25 09:35:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:35:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:02 compute-0 ceph-mon[74207]: pgmap v12: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.9 KiB/s wr, 18 op/s
Nov 25 09:35:02 compute-0 ceph-mon[74207]: Deploying daemon haproxy.nfs.cephfs.compute-1.xlgqkq on compute-1
Nov 25 09:35:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Nov 25 09:35:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 09:35:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:03 compute-0 ceph-mon[74207]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Nov 25 09:35:03 compute-0 ceph-mon[74207]: Cluster is now healthy
Nov 25 09:35:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v13: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Nov 25 09:35:04 compute-0 ceph-mon[74207]: pgmap v13: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Nov 25 09:35:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:35:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:35:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 25 09:35:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:04 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.lycwwd on compute-0
Nov 25 09:35:04 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.lycwwd on compute-0
Nov 25 09:35:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:04.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:04 compute-0 sudo[102097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:04 compute-0 sudo[102097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:04 compute-0 sudo[102097]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:04 compute-0 sudo[102122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:35:04 compute-0 sudo[102122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:04 compute-0 podman[102181]: 2025-11-25 09:35:04.796641782 +0000 UTC m=+0.027044239 container create 50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8 (image=quay.io/ceph/haproxy:2.3, name=epic_dewdney)
Nov 25 09:35:04 compute-0 systemd[1]: Started libpod-conmon-50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8.scope.
Nov 25 09:35:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:04 compute-0 podman[102181]: 2025-11-25 09:35:04.842095269 +0000 UTC m=+0.072497716 container init 50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8 (image=quay.io/ceph/haproxy:2.3, name=epic_dewdney)
Nov 25 09:35:04 compute-0 podman[102181]: 2025-11-25 09:35:04.846746829 +0000 UTC m=+0.077149277 container start 50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8 (image=quay.io/ceph/haproxy:2.3, name=epic_dewdney)
Nov 25 09:35:04 compute-0 podman[102181]: 2025-11-25 09:35:04.848081696 +0000 UTC m=+0.078484143 container attach 50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8 (image=quay.io/ceph/haproxy:2.3, name=epic_dewdney)
Nov 25 09:35:04 compute-0 epic_dewdney[102194]: 0 0
Nov 25 09:35:04 compute-0 systemd[1]: libpod-50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8.scope: Deactivated successfully.
Nov 25 09:35:04 compute-0 conmon[102194]: conmon 50ce096b0773bb5de2a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8.scope/container/memory.events
Nov 25 09:35:04 compute-0 podman[102181]: 2025-11-25 09:35:04.849851794 +0000 UTC m=+0.080254240 container died 50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8 (image=quay.io/ceph/haproxy:2.3, name=epic_dewdney)
Nov 25 09:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e04306372c8e8678d923fd5c3d048cadeb318908f02e0aaf7e3b5c83619acaab-merged.mount: Deactivated successfully.
Nov 25 09:35:04 compute-0 podman[102181]: 2025-11-25 09:35:04.868823992 +0000 UTC m=+0.099226440 container remove 50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8 (image=quay.io/ceph/haproxy:2.3, name=epic_dewdney)
Nov 25 09:35:04 compute-0 podman[102181]: 2025-11-25 09:35:04.78526367 +0000 UTC m=+0.015666137 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 09:35:04 compute-0 systemd[1]: libpod-conmon-50ce096b0773bb5de2a000686015ad51fa91701f1881e8844ad2a5bd67725da8.scope: Deactivated successfully.
Nov 25 09:35:04 compute-0 systemd[1]: Reloading.
Nov 25 09:35:04 compute-0 systemd-rc-local-generator[102234]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:35:04 compute-0 systemd-sysv-generator[102237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:35:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v14: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Nov 25 09:35:05 compute-0 systemd[1]: Reloading.
Nov 25 09:35:05 compute-0 systemd-rc-local-generator[102274]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:35:05 compute-0 systemd-sysv-generator[102278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:35:05 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.lycwwd for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:05 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:05 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:05 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:05 compute-0 ceph-mon[74207]: Deploying daemon haproxy.nfs.cephfs.compute-0.lycwwd on compute-0
Nov 25 09:35:05 compute-0 podman[102329]: 2025-11-25 09:35:05.498870566 +0000 UTC m=+0.026958397 container create 794722e6813b8003932e5d21ee2469f7567ab69d5d7075cb4a93b8f7f64dca52 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd)
Nov 25 09:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4b4c51f59774ee2973f01c4abd236b797e62169f4352bb6187d26cc86298de/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:05 compute-0 podman[102329]: 2025-11-25 09:35:05.53599557 +0000 UTC m=+0.064083411 container init 794722e6813b8003932e5d21ee2469f7567ab69d5d7075cb4a93b8f7f64dca52 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd)
Nov 25 09:35:05 compute-0 podman[102329]: 2025-11-25 09:35:05.539215491 +0000 UTC m=+0.067303322 container start 794722e6813b8003932e5d21ee2469f7567ab69d5d7075cb4a93b8f7f64dca52 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd)
Nov 25 09:35:05 compute-0 bash[102329]: 794722e6813b8003932e5d21ee2469f7567ab69d5d7075cb4a93b8f7f64dca52
Nov 25 09:35:05 compute-0 podman[102329]: 2025-11-25 09:35:05.487773504 +0000 UTC m=+0.015861356 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 09:35:05 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.lycwwd for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [NOTICE] 328/093505 (2) : New worker #1 (4) forked
Nov 25 09:35:05 compute-0 sudo[102122]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 25 09:35:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:05 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.flyakz on compute-2
Nov 25 09:35:05 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.flyakz on compute-2
Nov 25 09:35:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:05 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:06.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:06 compute-0 ceph-mon[74207]: pgmap v14: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Nov 25 09:35:06 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-0 ceph-mon[74207]: Deploying daemon haproxy.nfs.cephfs.compute-2.flyakz on compute-2
Nov 25 09:35:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:35:06 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:35:06 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 25 09:35:06 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Nov 25 09:35:06 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:06 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:06 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:06 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:06 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:06 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:06 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:06 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.kkgeot on compute-0
Nov 25 09:35:06 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.kkgeot on compute-0
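The three "is in" messages above show the ingress service confirming, host by host, that the virtual IP 192.168.122.2 falls inside a subnet carried by interface br-ex before placing keepalived there. The membership test itself reduces to stdlib ipaddress arithmetic; a sketch of that check (cephadm's actual code may differ):

    import ipaddress

    vip = ipaddress.ip_address("192.168.122.2")        # ingress virtual IP from the log
    subnet = ipaddress.ip_network("192.168.122.0/24")  # subnet on br-ex

    if vip in subnet:
        print(f"{vip} is in {subnet}")  # matches the cephadm.services.ingress lines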
Nov 25 09:35:06 compute-0 sudo[102353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:06 compute-0 sudo[102353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:06 compute-0 sudo[102353]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:06 compute-0 sudo[102378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:35:06 compute-0 sudo[102378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:06 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc0020f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:07.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v15: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Nov 25 09:35:07 compute-0 podman[102437]: 2025-11-25 09:35:07.122769832 +0000 UTC m=+0.027236782 container create 14a6bac6fcacab7000f695f30a3f5323714dc9c1c1e8744cc3ee6b25b5c3c1d3 (image=quay.io/ceph/keepalived:2.2.4, name=beautiful_archimedes, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., distribution-scope=public, release=1793, io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.component=keepalived-container, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 25 09:35:07 compute-0 systemd[1]: Started libpod-conmon-14a6bac6fcacab7000f695f30a3f5323714dc9c1c1e8744cc3ee6b25b5c3c1d3.scope.
Nov 25 09:35:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:07 compute-0 podman[102437]: 2025-11-25 09:35:07.178038896 +0000 UTC m=+0.082505856 container init 14a6bac6fcacab7000f695f30a3f5323714dc9c1c1e8744cc3ee6b25b5c3c1d3 (image=quay.io/ceph/keepalived:2.2.4, name=beautiful_archimedes, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.component=keepalived-container, architecture=x86_64, io.buildah.version=1.28.2, vendor=Red Hat, Inc., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph)
Nov 25 09:35:07 compute-0 podman[102437]: 2025-11-25 09:35:07.182160265 +0000 UTC m=+0.086627215 container start 14a6bac6fcacab7000f695f30a3f5323714dc9c1c1e8744cc3ee6b25b5c3c1d3 (image=quay.io/ceph/keepalived:2.2.4, name=beautiful_archimedes, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 09:35:07 compute-0 podman[102437]: 2025-11-25 09:35:07.183885939 +0000 UTC m=+0.088352889 container attach 14a6bac6fcacab7000f695f30a3f5323714dc9c1c1e8744cc3ee6b25b5c3c1d3 (image=quay.io/ceph/keepalived:2.2.4, name=beautiful_archimedes, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 25 09:35:07 compute-0 beautiful_archimedes[102450]: 0 0
Nov 25 09:35:07 compute-0 systemd[1]: libpod-14a6bac6fcacab7000f695f30a3f5323714dc9c1c1e8744cc3ee6b25b5c3c1d3.scope: Deactivated successfully.
Nov 25 09:35:07 compute-0 podman[102437]: 2025-11-25 09:35:07.185539496 +0000 UTC m=+0.090006447 container died 14a6bac6fcacab7000f695f30a3f5323714dc9c1c1e8744cc3ee6b25b5c3c1d3 (image=quay.io/ceph/keepalived:2.2.4, name=beautiful_archimedes, build-date=2023-02-22T09:23:20, distribution-scope=public, description=keepalived for Ceph, name=keepalived, io.openshift.tags=Ceph keepalived, vcs-type=git, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, architecture=x86_64, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 09:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f36c72079d443eabb057368cfab7927d9101e3063299cfe9b1720a66afa5eee5-merged.mount: Deactivated successfully.
Nov 25 09:35:07 compute-0 podman[102437]: 2025-11-25 09:35:07.204598059 +0000 UTC m=+0.109065009 container remove 14a6bac6fcacab7000f695f30a3f5323714dc9c1c1e8744cc3ee6b25b5c3c1d3 (image=quay.io/ceph/keepalived:2.2.4, name=beautiful_archimedes, release=1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.28.2, distribution-scope=public, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Nov 25 09:35:07 compute-0 podman[102437]: 2025-11-25 09:35:07.111479576 +0000 UTC m=+0.015946546 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 25 09:35:07 compute-0 systemd[1]: libpod-conmon-14a6bac6fcacab7000f695f30a3f5323714dc9c1c1e8744cc3ee6b25b5c3c1d3.scope: Deactivated successfully.
Nov 25 09:35:07 compute-0 systemd[1]: Reloading.
Nov 25 09:35:07 compute-0 systemd-rc-local-generator[102490]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:35:07 compute-0 systemd-sysv-generator[102493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:35:07 compute-0 systemd[1]: Reloading.
Nov 25 09:35:07 compute-0 systemd-sysv-generator[102533]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:35:07 compute-0 systemd-rc-local-generator[102530]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:35:07 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.kkgeot for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:07 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc002bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:07 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:07 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:07 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:07 compute-0 ceph-mon[74207]: Deploying daemon keepalived.nfs.cephfs.compute-0.kkgeot on compute-0
Nov 25 09:35:07 compute-0 ceph-mon[74207]: pgmap v15: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Nov 25 09:35:07 compute-0 podman[102586]: 2025-11-25 09:35:07.804366817 +0000 UTC m=+0.027806867 container create 2c711b03f81cf01a6739fdab0c479b67b4565c8cc6be794932bc35940d06a1e3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-type=git, description=keepalived for Ceph, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=)
Nov 25 09:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd13d3914f1d73a0ee31233719eb53f618f76a4ceda178df859665785059ee9/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:07 compute-0 podman[102586]: 2025-11-25 09:35:07.839390631 +0000 UTC m=+0.062830691 container init 2c711b03f81cf01a6739fdab0c479b67b4565c8cc6be794932bc35940d06a1e3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., name=keepalived, io.buildah.version=1.28.2, description=keepalived for Ceph, distribution-scope=public, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Nov 25 09:35:07 compute-0 podman[102586]: 2025-11-25 09:35:07.84395205 +0000 UTC m=+0.067392101 container start 2c711b03f81cf01a6739fdab0c479b67b4565c8cc6be794932bc35940d06a1e3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.component=keepalived-container, vcs-type=git, name=keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, distribution-scope=public)
Nov 25 09:35:07 compute-0 bash[102586]: 2c711b03f81cf01a6739fdab0c479b67b4565c8cc6be794932bc35940d06a1e3
Nov 25 09:35:07 compute-0 podman[102586]: 2025-11-25 09:35:07.792853711 +0000 UTC m=+0.016293771 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 25 09:35:07 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.kkgeot for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: Starting VRRP child process, pid=4
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: Startup complete
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:35:07 2025: (VI_0) Entering BACKUP STATE
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: (VI_0) Entering BACKUP STATE (init)
Nov 25 09:35:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:07 2025: VRRP_Script(check_backend) succeeded
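
The "Failed to bind to process monitoring socket - errno 98" entry during this startup is Linux EADDRINUSE, presumably because the rgw-default keepalived instance already running on this host holds the equivalent socket; keepalived logs it and continues through to "Startup complete". errno 98 reproduced with two stdlib sockets (the loopback address and port are arbitrary stand-ins):

    # Reproduces errno 98 (EADDRINUSE) from the keepalived line above.
    import errno, socket

    first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    first.bind(("127.0.0.1", 50098))   # placeholder port

    second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        second.bind(("127.0.0.1", 50098))
    except OSError as exc:
        assert exc.errno == errno.EADDRINUSE
        print(exc.errno, exc.strerror)  # 98 Address already in use
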
Nov 25 09:35:07 compute-0 sudo[102378]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 25 09:35:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:07 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:07 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:07 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:07 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:07 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:07 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:07 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.opynes on compute-2
Nov 25 09:35:07 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.opynes on compute-2
Nov 25 09:35:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:08 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:08.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:35:08 2025: (VI_0) Entering MASTER STATE
Nov 25 09:35:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:08 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:08 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:08 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:08 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:08 compute-0 ceph-mon[74207]: Deploying daemon keepalived.nfs.cephfs.compute-2.opynes on compute-2
Nov 25 09:35:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:35:09 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:35:09 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 25 09:35:09 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:09 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:09 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:09 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:09 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:09 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:09 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:09 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.adsqcr on compute-1
Nov 25 09:35:09 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.adsqcr on compute-1
Nov 25 09:35:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:09.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
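
The radosgw "beast:" access lines above share a fixed shape: client, user, bracketed timestamp, quoted request line, status, byte count, and a trailing latency field. A regex sketch for pulling those fields out, fitted to these samples only (the group names are ours):

    import re

    # Pattern fitted to the beast access lines in this log; not a general parser.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous '
            '[25/Nov/2025:09:35:09.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("client"), m.group("status"), m.group("latency"))
    # 192.168.122.100 200 0.000000000
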
Nov 25 09:35:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v16: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 178 B/s wr, 2 op/s
Nov 25 09:35:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:09 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:10 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc003510 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:10 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 09:35:10 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 09:35:10 compute-0 ceph-mon[74207]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 09:35:10 compute-0 ceph-mon[74207]: Deploying daemon keepalived.nfs.cephfs.compute-1.adsqcr on compute-1
Nov 25 09:35:10 compute-0 ceph-mon[74207]: pgmap v16: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 178 B/s wr, 2 op/s
Nov 25 09:35:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:10] "GET /metrics HTTP/1.1" 200 48297 "" "Prometheus/2.51.0"
Nov 25 09:35:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:10] "GET /metrics HTTP/1.1" 200 48297 "" "Prometheus/2.51.0"
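
The pair of entries above is the cluster's Prometheus server scraping the active mgr's built-in exporter (the 48297-byte /metrics payload). The mgr prometheus module serves that endpoint on port 9283 by default; a stdlib fetch assuming that default port and the mgr host shown in the log:

    # Minimal scrape of the mgr /metrics endpoint, mirroring the GET above.
    # Port 9283 is the prometheus module's documented default.
    from urllib.request import urlopen

    with urlopen("http://compute-0:9283/metrics", timeout=5) as resp:
        body = resp.read().decode()
        print(resp.status, len(body))  # e.g. 200 48297, as in the scrape above

    print(body.splitlines()[0])        # first exposition-format line
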
Nov 25 09:35:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:10.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:10 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:11.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v17: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 178 B/s wr, 2 op/s
Nov 25 09:35:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:11 2025: (VI_0) Entering MASTER STATE
Nov 25 09:35:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093511 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
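
The haproxy warning above is the ingress layer-4 check failing: a plain TCP connect to the ganesha backend nfs.cephfs.0 was refused, so the server is marked DOWN with 2 of the 3 backends remaining. That check is just a timed connect; a sketch with placeholder host and port (2049 is the conventional NFS port):

    import socket

    def l4_check(host, port, timeout=1.0):
        """Layer-4 health check: succeed iff a TCP connect completes."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:  # includes ECONNREFUSED, the reason logged above
            return False

    print(l4_check("compute-1", 2049))  # placeholder backend for this ingress
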
Nov 25 09:35:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:11 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:12 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:12 compute-0 ceph-mon[74207]: pgmap v17: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 178 B/s wr, 2 op/s
Nov 25 09:35:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:12.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:12 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:13.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v18: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:35:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:35:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:35:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 25 09:35:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:13 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 25cae543-a326-42c5-b79e-55ae39fb20c6 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Nov 25 09:35:13 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 25cae543-a326-42c5-b79e-55ae39fb20c6 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 12 seconds
Nov 25 09:35:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 25 09:35:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:35:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:35:13 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:35:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:35:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:35:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:35:13 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
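
The mon_command entries above show the mgr driving the mon with JSON-formatted commands (osd tree with states=["destroyed"], auth get, config generate-minimal-conf). The same JSON string can be submitted from the rados Python binding (python3-rados); a sketch assuming a readable /etc/ceph/ceph.conf and client keyring on the node:

    # Issues one of the mon commands visible above via the rados binding.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"],
                          "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, json.loads(outbuf or b"{}"))
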
Nov 25 09:35:13 compute-0 sudo[102610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:13 compute-0 sudo[102610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:13 compute-0 sudo[102610]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:13 compute-0 sudo[102635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:35:13 compute-0 sudo[102635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
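
In the sudo entry above, cephadm invokes ceph-volume with "--config-json -", meaning the config/keyring blob arrives on stdin rather than being written to disk. The same stdin-piping pattern from Python's stdlib, with "cat" as a hypothetical stand-in for the wrapped command and an illustrative payload (only the fsid is taken from the log):

    # "--config-json -" above means "read the config blob from stdin".
    import json, subprocess

    payload = {"config": "[global]\nfsid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90\n"}
    result = subprocess.run(
        ["cat"],                      # placeholder for the wrapped command
        input=json.dumps(payload),
        text=True, capture_output=True, check=True,
    )
    print(result.stdout)
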
Nov 25 09:35:13 compute-0 podman[102691]: 2025-11-25 09:35:13.553646797 +0000 UTC m=+0.025870638 container create 98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:35:13 compute-0 systemd[1]: Started libpod-conmon-98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888.scope.
Nov 25 09:35:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:13 compute-0 podman[102691]: 2025-11-25 09:35:13.601910712 +0000 UTC m=+0.074134553 container init 98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:35:13 compute-0 podman[102691]: 2025-11-25 09:35:13.606218354 +0000 UTC m=+0.078442195 container start 98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:13 compute-0 podman[102691]: 2025-11-25 09:35:13.607154579 +0000 UTC m=+0.079378430 container attach 98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:13 compute-0 admiring_solomon[102704]: 167 167
Nov 25 09:35:13 compute-0 systemd[1]: libpod-98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888.scope: Deactivated successfully.
Nov 25 09:35:13 compute-0 conmon[102704]: conmon 98ecff6a97e94060768e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888.scope/container/memory.events
Nov 25 09:35:13 compute-0 podman[102691]: 2025-11-25 09:35:13.609989032 +0000 UTC m=+0.082212874 container died 98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-892b8fdc7c27e5be83a81bc3a2031202bf0de7299c82748ee524f7bd78d856ba-merged.mount: Deactivated successfully.
Nov 25 09:35:13 compute-0 podman[102691]: 2025-11-25 09:35:13.638521397 +0000 UTC m=+0.110745238 container remove 98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:35:13 compute-0 podman[102691]: 2025-11-25 09:35:13.542989494 +0000 UTC m=+0.015213345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:13 compute-0 systemd[1]: libpod-conmon-98ecff6a97e94060768e1b6143ea9deba6c4a7975717e7c61dd8752adb144888.scope: Deactivated successfully.
Nov 25 09:35:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:13 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c0001930 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:13 compute-0 podman[102726]: 2025-11-25 09:35:13.746829388 +0000 UTC m=+0.024943689 container create c2a0f7da8a2b373727ea0facc6bd7a97f415d6f26332c8493a6f543b6e44f61f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:35:13 compute-0 systemd[1]: Started libpod-conmon-c2a0f7da8a2b373727ea0facc6bd7a97f415d6f26332c8493a6f543b6e44f61f.scope.
Nov 25 09:35:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6325ce49e5e2ca81f265ffabf97ca1081daac4cc568c1f39b93f18b84bfaabc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6325ce49e5e2ca81f265ffabf97ca1081daac4cc568c1f39b93f18b84bfaabc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6325ce49e5e2ca81f265ffabf97ca1081daac4cc568c1f39b93f18b84bfaabc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6325ce49e5e2ca81f265ffabf97ca1081daac4cc568c1f39b93f18b84bfaabc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6325ce49e5e2ca81f265ffabf97ca1081daac4cc568c1f39b93f18b84bfaabc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
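
The kernel's "supports timestamps until 2038 (0x7fffffff)" notices mark XFS filesystems whose inodes still carry 32-bit second counters; 0x7fffffff is the largest signed 32-bit time_t. Decoding it:

    # 0x7fffffff from the xfs lines above is the 32-bit time_t ceiling.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF                            # 2147483647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
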
Nov 25 09:35:13 compute-0 podman[102726]: 2025-11-25 09:35:13.808369465 +0000 UTC m=+0.086483765 container init c2a0f7da8a2b373727ea0facc6bd7a97f415d6f26332c8493a6f543b6e44f61f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:13 compute-0 podman[102726]: 2025-11-25 09:35:13.813144918 +0000 UTC m=+0.091259208 container start c2a0f7da8a2b373727ea0facc6bd7a97f415d6f26332c8493a6f543b6e44f61f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:13 compute-0 podman[102726]: 2025-11-25 09:35:13.818921429 +0000 UTC m=+0.097035740 container attach c2a0f7da8a2b373727ea0facc6bd7a97f415d6f26332c8493a6f543b6e44f61f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:35:13 compute-0 podman[102726]: 2025-11-25 09:35:13.73669441 +0000 UTC m=+0.014808731 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:14 compute-0 sweet_tu[102739]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:35:14 compute-0 sweet_tu[102739]: --> All data devices are unavailable
Nov 25 09:35:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:14 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc003e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:14 compute-0 systemd[1]: libpod-c2a0f7da8a2b373727ea0facc6bd7a97f415d6f26332c8493a6f543b6e44f61f.scope: Deactivated successfully.
Nov 25 09:35:14 compute-0 podman[102726]: 2025-11-25 09:35:14.053464249 +0000 UTC m=+0.331578551 container died c2a0f7da8a2b373727ea0facc6bd7a97f415d6f26332c8493a6f543b6e44f61f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tu, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6325ce49e5e2ca81f265ffabf97ca1081daac4cc568c1f39b93f18b84bfaabc-merged.mount: Deactivated successfully.
Nov 25 09:35:14 compute-0 podman[102726]: 2025-11-25 09:35:14.073775704 +0000 UTC m=+0.351890005 container remove c2a0f7da8a2b373727ea0facc6bd7a97f415d6f26332c8493a6f543b6e44f61f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_tu, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:35:14 compute-0 systemd[1]: libpod-conmon-c2a0f7da8a2b373727ea0facc6bd7a97f415d6f26332c8493a6f543b6e44f61f.scope: Deactivated successfully.
Nov 25 09:35:14 compute-0 sudo[102635]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:14 compute-0 sudo[102767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:14 compute-0 sudo[102767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:14 compute-0 sudo[102767]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:14 compute-0 sudo[102792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:35:14 compute-0 sudo[102792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:14 compute-0 ceph-mon[74207]: pgmap v18: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:35:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:35:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:35:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:14.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:14 compute-0 podman[102848]: 2025-11-25 09:35:14.464692238 +0000 UTC m=+0.028841338 container create 211900faaf4b7696f5c066a676e0dc8107d89e1f8ce0ae1de69c0e282e5e638e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_buck, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:35:14 compute-0 systemd[1]: Started libpod-conmon-211900faaf4b7696f5c066a676e0dc8107d89e1f8ce0ae1de69c0e282e5e638e.scope.
Nov 25 09:35:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:14 compute-0 podman[102848]: 2025-11-25 09:35:14.510507347 +0000 UTC m=+0.074656458 container init 211900faaf4b7696f5c066a676e0dc8107d89e1f8ce0ae1de69c0e282e5e638e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:35:14 compute-0 podman[102848]: 2025-11-25 09:35:14.514667711 +0000 UTC m=+0.078816812 container start 211900faaf4b7696f5c066a676e0dc8107d89e1f8ce0ae1de69c0e282e5e638e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 09:35:14 compute-0 podman[102848]: 2025-11-25 09:35:14.515725705 +0000 UTC m=+0.079874806 container attach 211900faaf4b7696f5c066a676e0dc8107d89e1f8ce0ae1de69c0e282e5e638e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:35:14 compute-0 happy_buck[102861]: 167 167
Nov 25 09:35:14 compute-0 systemd[1]: libpod-211900faaf4b7696f5c066a676e0dc8107d89e1f8ce0ae1de69c0e282e5e638e.scope: Deactivated successfully.
Nov 25 09:35:14 compute-0 podman[102848]: 2025-11-25 09:35:14.517800427 +0000 UTC m=+0.081949518 container died 211900faaf4b7696f5c066a676e0dc8107d89e1f8ce0ae1de69c0e282e5e638e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:35:14 compute-0 podman[102848]: 2025-11-25 09:35:14.534336103 +0000 UTC m=+0.098485203 container remove 211900faaf4b7696f5c066a676e0dc8107d89e1f8ce0ae1de69c0e282e5e638e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_buck, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:35:14 compute-0 podman[102848]: 2025-11-25 09:35:14.452486496 +0000 UTC m=+0.016635617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:14 compute-0 systemd[1]: libpod-conmon-211900faaf4b7696f5c066a676e0dc8107d89e1f8ce0ae1de69c0e282e5e638e.scope: Deactivated successfully.
Nov 25 09:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1877bb4e7866f0982de4f7ed77c126564f701f7888c6cfb4cb41b699c9505c5-merged.mount: Deactivated successfully.
Nov 25 09:35:14 compute-0 podman[102882]: 2025-11-25 09:35:14.644354939 +0000 UTC m=+0.027641415 container create aa7c321f8ed13e00b4d3ebff2e48a0f42b86dc436b20d6680d0a4a2fd039c1fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:14 compute-0 systemd[1]: Started libpod-conmon-aa7c321f8ed13e00b4d3ebff2e48a0f42b86dc436b20d6680d0a4a2fd039c1fa.scope.
Nov 25 09:35:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7da3dea610ef715eabaea2f0ac918e8979570e2ff6d095557a14a44d7418a2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7da3dea610ef715eabaea2f0ac918e8979570e2ff6d095557a14a44d7418a2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7da3dea610ef715eabaea2f0ac918e8979570e2ff6d095557a14a44d7418a2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7da3dea610ef715eabaea2f0ac918e8979570e2ff6d095557a14a44d7418a2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:14 compute-0 podman[102882]: 2025-11-25 09:35:14.700574935 +0000 UTC m=+0.083861421 container init aa7c321f8ed13e00b4d3ebff2e48a0f42b86dc436b20d6680d0a4a2fd039c1fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:35:14 compute-0 podman[102882]: 2025-11-25 09:35:14.705333397 +0000 UTC m=+0.088619873 container start aa7c321f8ed13e00b4d3ebff2e48a0f42b86dc436b20d6680d0a4a2fd039c1fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:35:14 compute-0 podman[102882]: 2025-11-25 09:35:14.706452095 +0000 UTC m=+0.089738572 container attach aa7c321f8ed13e00b4d3ebff2e48a0f42b86dc436b20d6680d0a4a2fd039c1fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shamir, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 09:35:14 compute-0 podman[102882]: 2025-11-25 09:35:14.633227711 +0000 UTC m=+0.016514197 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:14 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc003e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:14 compute-0 great_shamir[102895]: {
Nov 25 09:35:14 compute-0 great_shamir[102895]:     "1": [
Nov 25 09:35:14 compute-0 great_shamir[102895]:         {
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "devices": [
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "/dev/loop3"
Nov 25 09:35:14 compute-0 great_shamir[102895]:             ],
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "lv_name": "ceph_lv0",
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "lv_size": "21470642176",
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "name": "ceph_lv0",
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "tags": {
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.cluster_name": "ceph",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.crush_device_class": "",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.encrypted": "0",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.osd_id": "1",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.type": "block",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.vdo": "0",
Nov 25 09:35:14 compute-0 great_shamir[102895]:                 "ceph.with_tpm": "0"
Nov 25 09:35:14 compute-0 great_shamir[102895]:             },
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "type": "block",
Nov 25 09:35:14 compute-0 great_shamir[102895]:             "vg_name": "ceph_vg0"
Nov 25 09:35:14 compute-0 great_shamir[102895]:         }
Nov 25 09:35:14 compute-0 great_shamir[102895]:     ]
Nov 25 09:35:14 compute-0 great_shamir[102895]: }
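
The JSON blob above is "ceph-volume lvm list --format json" output relayed through the one-shot container: OSD id 1 lives on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3, with its identity recorded in the lv_tags/tags fields. It also explains the earlier "All data devices are unavailable" from lvm batch: the only candidate LV already carries an OSD. A sketch that extracts the essentials, assuming the blob were saved locally as lvm_list.json (an assumed filename):

    # Pulls the OSD identity out of `ceph-volume lvm list --format json`
    # output shaped like the blob above; keys map osd id -> list of LVs.
    import json

    with open("lvm_list.json") as fh:
        report = json.load(fh)

    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print("osd.%s: %s (fsid=%s, devices=%s)" % (
                osd_id, lv["lv_path"], tags["ceph.osd_fsid"], lv["devices"]))
    # osd.1: /dev/ceph_vg0/ceph_lv0 (fsid=26fb5eac-2c31-4a21-bbae-433f98108699,
    #        devices=['/dev/loop3'])
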
Nov 25 09:35:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 25 09:35:14 compute-0 systemd[1]: libpod-aa7c321f8ed13e00b4d3ebff2e48a0f42b86dc436b20d6680d0a4a2fd039c1fa.scope: Deactivated successfully.
Nov 25 09:35:14 compute-0 podman[102882]: 2025-11-25 09:35:14.942407739 +0000 UTC m=+0.325694216 container died aa7c321f8ed13e00b4d3ebff2e48a0f42b86dc436b20d6680d0a4a2fd039c1fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shamir, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:14 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 14 completed events
Nov 25 09:35:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:35:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:35:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7da3dea610ef715eabaea2f0ac918e8979570e2ff6d095557a14a44d7418a2a-merged.mount: Deactivated successfully.
Nov 25 09:35:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:14 compute-0 podman[102882]: 2025-11-25 09:35:14.96845591 +0000 UTC m=+0.351742386 container remove aa7c321f8ed13e00b4d3ebff2e48a0f42b86dc436b20d6680d0a4a2fd039c1fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shamir, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:35:14 compute-0 systemd[1]: libpod-conmon-aa7c321f8ed13e00b4d3ebff2e48a0f42b86dc436b20d6680d0a4a2fd039c1fa.scope: Deactivated successfully.
Nov 25 09:35:15 compute-0 sudo[102792]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:15 compute-0 sudo[102914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:15 compute-0 sudo[102914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:15 compute-0 sudo[102914]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:15 compute-0 sudo[102939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:35:15 compute-0 sudo[102939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
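The sudo COMMAND above shows how the mgr drives per-host inventory: a versioned cephadm binary under /var/lib/ceph/<fsid>/ runs ceph-volume inside the Ceph container. A sketch replaying that exact call and decoding its JSON (wrapping it in Python like this is an illustration; paths, image and fsid are copied from the logged command):

import json
import subprocess

FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
CEPHADM = f"/var/lib/ceph/{FSID}/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36"
IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"

# Same invocation as the logged sudo COMMAND; `raw list` reports raw-mode
# OSDs keyed by device path, and {} on a host that only has LVM-mode OSDs.
out = subprocess.run(
    ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
     "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(out))

The `{}` printed by the jolly_keller container a few lines below appears to be exactly this empty raw-mode report: the host's only OSD is LVM-based, so raw list has nothing to show.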
Nov 25 09:35:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:15.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
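These anonymous `HEAD / HTTP/1.0` probes recur roughly every second from 192.168.122.100 and .102 and always return 200 with zero latency — a pattern consistent with the ingress haproxy health checks rather than client traffic. Reproducing one by hand; the RGW frontend port is an assumption, since the log never shows it:

import http.client

# Issue the same probe the balancer sends; port 8080 is an assumed RGW
# frontend port, not taken from the log.
conn = http.client.HTTPConnection("compute-0", 8080, timeout=2)
conn.request("HEAD", "/")
print(conn.getresponse().status)  # 200, matching http_status=200 above
conn.close()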
Nov 25 09:35:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v19: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:35:15 compute-0 podman[102995]: 2025-11-25 09:35:15.35499982 +0000 UTC m=+0.026622364 container create 0639d4a3b8df2baa8f5c350c4af566179109806df67842ad45774e9dfc2fa51f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 25 09:35:15 compute-0 systemd[1]: Started libpod-conmon-0639d4a3b8df2baa8f5c350c4af566179109806df67842ad45774e9dfc2fa51f.scope.
Nov 25 09:35:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:15 compute-0 podman[102995]: 2025-11-25 09:35:15.396124125 +0000 UTC m=+0.067746679 container init 0639d4a3b8df2baa8f5c350c4af566179109806df67842ad45774e9dfc2fa51f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dhawan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:35:15 compute-0 podman[102995]: 2025-11-25 09:35:15.400949562 +0000 UTC m=+0.072572107 container start 0639d4a3b8df2baa8f5c350c4af566179109806df67842ad45774e9dfc2fa51f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dhawan, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:15 compute-0 podman[102995]: 2025-11-25 09:35:15.401988432 +0000 UTC m=+0.073610996 container attach 0639d4a3b8df2baa8f5c350c4af566179109806df67842ad45774e9dfc2fa51f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:35:15 compute-0 jolly_dhawan[103008]: 167 167
Nov 25 09:35:15 compute-0 systemd[1]: libpod-0639d4a3b8df2baa8f5c350c4af566179109806df67842ad45774e9dfc2fa51f.scope: Deactivated successfully.
Nov 25 09:35:15 compute-0 podman[102995]: 2025-11-25 09:35:15.403560916 +0000 UTC m=+0.075183460 container died 0639d4a3b8df2baa8f5c350c4af566179109806df67842ad45774e9dfc2fa51f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcad521d8b6f10e70fcad845fd0aaed9459d9d1b55ccc50aedf943d96af56d4e-merged.mount: Deactivated successfully.
Nov 25 09:35:15 compute-0 podman[102995]: 2025-11-25 09:35:15.424190981 +0000 UTC m=+0.095813525 container remove 0639d4a3b8df2baa8f5c350c4af566179109806df67842ad45774e9dfc2fa51f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dhawan, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:35:15 compute-0 podman[102995]: 2025-11-25 09:35:15.344436605 +0000 UTC m=+0.016059169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:15 compute-0 systemd[1]: libpod-conmon-0639d4a3b8df2baa8f5c350c4af566179109806df67842ad45774e9dfc2fa51f.scope: Deactivated successfully.
Nov 25 09:35:15 compute-0 podman[103030]: 2025-11-25 09:35:15.536781216 +0000 UTC m=+0.029379882 container create 9705d5dba1baedb400a5e1ef7956b555936fbf10a3441dc828f5850557e2731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:35:15 compute-0 systemd[1]: Started libpod-conmon-9705d5dba1baedb400a5e1ef7956b555936fbf10a3441dc828f5850557e2731c.scope.
Nov 25 09:35:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c139f7729d87f120461126c92f013e2af83a071814108c63bf0707f32ef87df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c139f7729d87f120461126c92f013e2af83a071814108c63bf0707f32ef87df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c139f7729d87f120461126c92f013e2af83a071814108c63bf0707f32ef87df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c139f7729d87f120461126c92f013e2af83a071814108c63bf0707f32ef87df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:15 compute-0 podman[103030]: 2025-11-25 09:35:15.587020906 +0000 UTC m=+0.079619563 container init 9705d5dba1baedb400a5e1ef7956b555936fbf10a3441dc828f5850557e2731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keller, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:15 compute-0 podman[103030]: 2025-11-25 09:35:15.593098325 +0000 UTC m=+0.085696981 container start 9705d5dba1baedb400a5e1ef7956b555936fbf10a3441dc828f5850557e2731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:15 compute-0 podman[103030]: 2025-11-25 09:35:15.594215661 +0000 UTC m=+0.086814318 container attach 9705d5dba1baedb400a5e1ef7956b555936fbf10a3441dc828f5850557e2731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:35:15 compute-0 podman[103030]: 2025-11-25 09:35:15.523639138 +0000 UTC m=+0.016237815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:15 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000a940 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:15 compute-0 ceph-mon[74207]: pgmap v19: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:35:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:16 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000a940 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:16 compute-0 lvm[103121]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:35:16 compute-0 lvm[103121]: VG ceph_vg0 finished
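The two lvm[103121] lines are event-driven autoactivation: a udev-triggered pvscan saw /dev/loop3 come online, determined that every PV of ceph_vg0 is now present, and finished the VG. The same completeness can be checked from the CLI; `vgs` is the standard LVM reporting tool, and the parsing below is illustrative:

import subprocess

# Report PV/LV counts for the VG the pvscan messages declared complete.
fields = subprocess.run(
    ["vgs", "--noheadings", "-o", "vg_name,pv_count,lv_count", "ceph_vg0"],
    capture_output=True, text=True, check=True,
).stdout.split()
print(dict(zip(("vg_name", "pv_count", "lv_count"), fields)))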
Nov 25 09:35:16 compute-0 jolly_keller[103044]: {}
Nov 25 09:35:16 compute-0 podman[103030]: 2025-11-25 09:35:16.093731365 +0000 UTC m=+0.586330022 container died 9705d5dba1baedb400a5e1ef7956b555936fbf10a3441dc828f5850557e2731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:35:16 compute-0 systemd[1]: libpod-9705d5dba1baedb400a5e1ef7956b555936fbf10a3441dc828f5850557e2731c.scope: Deactivated successfully.
Nov 25 09:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c139f7729d87f120461126c92f013e2af83a071814108c63bf0707f32ef87df-merged.mount: Deactivated successfully.
Nov 25 09:35:16 compute-0 podman[103030]: 2025-11-25 09:35:16.115514344 +0000 UTC m=+0.608113001 container remove 9705d5dba1baedb400a5e1ef7956b555936fbf10a3441dc828f5850557e2731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:16 compute-0 systemd[1]: libpod-conmon-9705d5dba1baedb400a5e1ef7956b555936fbf10a3441dc828f5850557e2731c.scope: Deactivated successfully.
Nov 25 09:35:16 compute-0 sudo[102939]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:16 compute-0 sudo[103134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:35:16 compute-0 sudo[103134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:16 compute-0 sudo[103134]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:16 compute-0 sudo[103159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:16 compute-0 sudo[103159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:16 compute-0 sudo[103159]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:16 compute-0 sudo[103184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:35:16 compute-0 sudo[103184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
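`cephadm ... ls`, logged just above, inventories every daemon deployed on this host. A sketch of consuming its output, assuming the usual JSON-array shape — the log does not include the payload, so the field names here are assumptions:

import json
import subprocess

FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
CEPHADM = f"/var/lib/ceph/{FSID}/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36"

daemons = json.loads(subprocess.run(
    ["sudo", "/bin/python3", CEPHADM, "ls"],
    capture_output=True, text=True, check=True,
).stdout)
for d in daemons:
    # "name" and "state" are the conventional cephadm ls fields (assumed here).
    print(d.get("name"), d.get("state"))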
Nov 25 09:35:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:16.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:16 compute-0 podman[103266]: 2025-11-25 09:35:16.738938021 +0000 UTC m=+0.035364276 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs[97904]: Tue Nov 25 09:35:16 2025: (VI_0) received an invalid passwd!
Nov 25 09:35:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-0-kkgeot[102598]: Tue Nov 25 09:35:16 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Nov 25 09:35:16 compute-0 podman[103283]: 2025-11-25 09:35:16.867980844 +0000 UTC m=+0.045594784 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:35:16 compute-0 podman[103266]: 2025-11-25 09:35:16.871689217 +0000 UTC m=+0.168115482 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:35:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:16 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc003e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:17.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v20: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:35:17 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:17 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:17 compute-0 podman[103376]: 2025-11-25 09:35:17.198083222 +0000 UTC m=+0.033297460 container exec dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:17 compute-0 podman[103376]: 2025-11-25 09:35:17.205076806 +0000 UTC m=+0.040291034 container exec_died dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:17 compute-0 podman[103445]: 2025-11-25 09:35:17.388240207 +0000 UTC m=+0.032907834 container exec 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:17 compute-0 podman[103445]: 2025-11-25 09:35:17.40811571 +0000 UTC m=+0.052783338 container exec_died 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:17 compute-0 podman[103502]: 2025-11-25 09:35:17.542845887 +0000 UTC m=+0.032338681 container exec e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:17 compute-0 podman[103502]: 2025-11-25 09:35:17.665191585 +0000 UTC m=+0.154684359 container exec_died e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:35:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:35:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:17 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc003e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:17 compute-0 podman[103560]: 2025-11-25 09:35:17.80602026 +0000 UTC m=+0.037093345 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:35:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:35:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:35:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:17 compute-0 podman[103578]: 2025-11-25 09:35:17.864017025 +0000 UTC m=+0.047560912 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:35:17 compute-0 podman[103560]: 2025-11-25 09:35:17.867395845 +0000 UTC m=+0.098468910 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:35:17 compute-0 sshd-session[103586]: Accepted publickey for zuul from 192.168.122.30 port 60592 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:35:17 compute-0 systemd-logind[744]: New session 38 of user zuul.
Nov 25 09:35:17 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 25 09:35:17 compute-0 sshd-session[103586]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:35:18 compute-0 podman[103617]: 2025-11-25 09:35:18.016187664 +0000 UTC m=+0.039379277 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, description=keepalived for Ceph, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=Ceph keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, name=keepalived)
Nov 25 09:35:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:18 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000b860 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:18 compute-0 podman[103658]: 2025-11-25 09:35:18.076973317 +0000 UTC m=+0.045624821 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public)
Nov 25 09:35:18 compute-0 podman[103617]: 2025-11-25 09:35:18.079965969 +0000 UTC m=+0.103157582 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, description=keepalived for Ceph, io.buildah.version=1.28.2, vendor=Red Hat, Inc., vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, distribution-scope=public)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: pgmap v20: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:35:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-0 podman[103721]: 2025-11-25 09:35:18.216934385 +0000 UTC m=+0.033622643 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:18 compute-0 podman[103721]: 2025-11-25 09:35:18.243115698 +0000 UTC m=+0.059803954 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:18 compute-0 podman[103768]: 2025-11-25 09:35:18.359235727 +0000 UTC m=+0.035532574 container exec f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:35:18 compute-0 podman[103809]: 2025-11-25 09:35:18.419974121 +0000 UTC m=+0.046286187 container exec_died f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:35:18 compute-0 podman[103768]: 2025-11-25 09:35:18.42249788 +0000 UTC m=+0.098794727 container exec_died f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:35:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:18.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:18 compute-0 sudo[103184]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:35:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:35:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:35:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:35:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
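This burst of mon_commands is cephadm refreshing what it distributes to hosts: `config generate-minimal-conf` and `auth get client.admin` together yield the /etc/ceph/ceph.conf and admin keyring it pushes out, while `auth get client.bootstrap-osd` fetches the credential used for OSD creation below. The same pair run by hand via the ceph CLI — the destination paths here are illustrative:

import pathlib
import subprocess

def mon_cmd(*args):
    # Dispatches the same monitor commands seen in the audit log above.
    return subprocess.run(["ceph", *args], capture_output=True,
                          text=True, check=True).stdout

pathlib.Path("/tmp/ceph.conf").write_text(mon_cmd("config", "generate-minimal-conf"))
pathlib.Path("/tmp/ceph.client.admin.keyring").write_text(
    mon_cmd("auth", "get", "client.admin"))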
Nov 25 09:35:18 compute-0 sudo[103923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:18 compute-0 sudo[103923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:18 compute-0 sudo[103923]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:18 compute-0 sudo[103948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:35:18 compute-0 sudo[103948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
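Here cephadm moves from inventory to OSD creation: CEPH_VOLUME_OSDSPEC_AFFINITY carries the drive-group name into the new LV tags, and `--config-json -` feeds a config-plus-keyring blob over stdin so no secret lands on the command line. A sketch of that stdin contract, with placeholder blob contents (cephadm sends the real minimal conf and the client.bootstrap-osd keyring fetched above):

import json
import subprocess

FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
CEPHADM = f"/var/lib/ceph/{FSID}/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36"
IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"

# Placeholder credentials stand in for what cephadm actually pipes in.
blob = json.dumps({"config": "<minimal ceph.conf>", "keyring": "<bootstrap-osd keyring>"})

# Flags mirror the logged sudo COMMAND; blob goes to cephadm's stdin ("-").
subprocess.run(
    ["sudo", "/bin/python3", CEPHADM,
     "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
     "--image", IMAGE, "--timeout", "895",
     "ceph-volume", "--fsid", FSID, "--config-json", "-",
     "--", "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
     "--yes", "--no-systemd"],
    input=blob, text=True, check=True,
)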
Nov 25 09:35:18 compute-0 python3.9[103911]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:35:18 compute-0 sudo[103973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:35:18 compute-0 sudo[103973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:18 compute-0 sudo[103973]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:18 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000b860 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:18 compute-0 podman[104035]: 2025-11-25 09:35:18.977939084 +0000 UTC m=+0.029750241 container create 3c05a853e53599ebd84d895cdd77c6bebd8fb5b5bc7d7dbd15884f157ade4ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:19 compute-0 systemd[1]: Started libpod-conmon-3c05a853e53599ebd84d895cdd77c6bebd8fb5b5bc7d7dbd15884f157ade4ae6.scope.
Nov 25 09:35:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:19 compute-0 podman[104035]: 2025-11-25 09:35:19.027743775 +0000 UTC m=+0.079554952 container init 3c05a853e53599ebd84d895cdd77c6bebd8fb5b5bc7d7dbd15884f157ade4ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:35:19 compute-0 podman[104035]: 2025-11-25 09:35:19.032545337 +0000 UTC m=+0.084356484 container start 3c05a853e53599ebd84d895cdd77c6bebd8fb5b5bc7d7dbd15884f157ade4ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:35:19 compute-0 podman[104035]: 2025-11-25 09:35:19.033450353 +0000 UTC m=+0.085261510 container attach 3c05a853e53599ebd84d895cdd77c6bebd8fb5b5bc7d7dbd15884f157ade4ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:19 compute-0 optimistic_stonebraker[104055]: 167 167
Nov 25 09:35:19 compute-0 systemd[1]: libpod-3c05a853e53599ebd84d895cdd77c6bebd8fb5b5bc7d7dbd15884f157ade4ae6.scope: Deactivated successfully.
Nov 25 09:35:19 compute-0 podman[104035]: 2025-11-25 09:35:19.035425327 +0000 UTC m=+0.087236475 container died 3c05a853e53599ebd84d895cdd77c6bebd8fb5b5bc7d7dbd15884f157ade4ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d18b511462d2e3a69c60b70a257191bd70b0795ef7e97ddf1b60e1566aa4e38-merged.mount: Deactivated successfully.
Nov 25 09:35:19 compute-0 podman[104035]: 2025-11-25 09:35:19.053156106 +0000 UTC m=+0.104967263 container remove 3c05a853e53599ebd84d895cdd77c6bebd8fb5b5bc7d7dbd15884f157ade4ae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 09:35:19 compute-0 podman[104035]: 2025-11-25 09:35:18.96588619 +0000 UTC m=+0.017697367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:19 compute-0 systemd[1]: libpod-conmon-3c05a853e53599ebd84d895cdd77c6bebd8fb5b5bc7d7dbd15884f157ade4ae6.scope: Deactivated successfully.
Nov 25 09:35:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:19.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v21: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:35:19 compute-0 podman[104085]: 2025-11-25 09:35:19.164773036 +0000 UTC m=+0.029425018 container create 7fd384ec66e4410ed612ff68fdda569fc3fd02c173d4b957d4a5a55e969743b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:35:19 compute-0 systemd[1]: Started libpod-conmon-7fd384ec66e4410ed612ff68fdda569fc3fd02c173d4b957d4a5a55e969743b4.scope.
Nov 25 09:35:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8353ca5216902a0b13ec541aa3bfc3269f3e39bf974758cc4f63dd63bd0ea177/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8353ca5216902a0b13ec541aa3bfc3269f3e39bf974758cc4f63dd63bd0ea177/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8353ca5216902a0b13ec541aa3bfc3269f3e39bf974758cc4f63dd63bd0ea177/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8353ca5216902a0b13ec541aa3bfc3269f3e39bf974758cc4f63dd63bd0ea177/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8353ca5216902a0b13ec541aa3bfc3269f3e39bf974758cc4f63dd63bd0ea177/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:19 compute-0 podman[104085]: 2025-11-25 09:35:19.220136196 +0000 UTC m=+0.084788168 container init 7fd384ec66e4410ed612ff68fdda569fc3fd02c173d4b957d4a5a55e969743b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:19 compute-0 podman[104085]: 2025-11-25 09:35:19.226343409 +0000 UTC m=+0.090995380 container start 7fd384ec66e4410ed612ff68fdda569fc3fd02c173d4b957d4a5a55e969743b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:35:19 compute-0 podman[104085]: 2025-11-25 09:35:19.229946672 +0000 UTC m=+0.094598644 container attach 7fd384ec66e4410ed612ff68fdda569fc3fd02c173d4b957d4a5a55e969743b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_montalcini, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:19 compute-0 podman[104085]: 2025-11-25 09:35:19.153519469 +0000 UTC m=+0.018171461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:19 compute-0 pedantic_montalcini[104098]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:35:19 compute-0 pedantic_montalcini[104098]: --> All data devices are unavailable
Nov 25 09:35:19 compute-0 systemd[1]: libpod-7fd384ec66e4410ed612ff68fdda569fc3fd02c173d4b957d4a5a55e969743b4.scope: Deactivated successfully.
Nov 25 09:35:19 compute-0 podman[104085]: 2025-11-25 09:35:19.489682341 +0000 UTC m=+0.354334323 container died 7fd384ec66e4410ed612ff68fdda569fc3fd02c173d4b957d4a5a55e969743b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 25 09:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8353ca5216902a0b13ec541aa3bfc3269f3e39bf974758cc4f63dd63bd0ea177-merged.mount: Deactivated successfully.
Nov 25 09:35:19 compute-0 podman[104085]: 2025-11-25 09:35:19.511780164 +0000 UTC m=+0.376432136 container remove 7fd384ec66e4410ed612ff68fdda569fc3fd02c173d4b957d4a5a55e969743b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_montalcini, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:19 compute-0 systemd[1]: libpod-conmon-7fd384ec66e4410ed612ff68fdda569fc3fd02c173d4b957d4a5a55e969743b4.scope: Deactivated successfully.
Nov 25 09:35:19 compute-0 sudo[103948]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:19 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:19 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:19 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:19 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:35:19 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:19 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:19 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:35:19 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:35:19 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:19 compute-0 sudo[104211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:19 compute-0 sudo[104211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:19 compute-0 sudo[104211]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:19 compute-0 sudo[104265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:35:19 compute-0 sudo[104265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:19 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc005320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:19 compute-0 sudo[104384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juypokyqmzcmgtyxhumobgcrigvwgksq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063319.5397034-56-248265281664226/AnsiballZ_command.py'
Nov 25 09:35:19 compute-0 sudo[104384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:19 compute-0 podman[104398]: 2025-11-25 09:35:19.914856724 +0000 UTC m=+0.026291751 container create c57bb930ed39930b1dc8f2a646c146337e069d988351d7283ba50a4a242db234 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wu, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:35:19 compute-0 systemd[1]: Started libpod-conmon-c57bb930ed39930b1dc8f2a646c146337e069d988351d7283ba50a4a242db234.scope.
Nov 25 09:35:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:19 compute-0 podman[104398]: 2025-11-25 09:35:19.960619173 +0000 UTC m=+0.072054200 container init c57bb930ed39930b1dc8f2a646c146337e069d988351d7283ba50a4a242db234 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wu, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:19 compute-0 podman[104398]: 2025-11-25 09:35:19.964986216 +0000 UTC m=+0.076421244 container start c57bb930ed39930b1dc8f2a646c146337e069d988351d7283ba50a4a242db234 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:35:19 compute-0 sleepy_wu[104411]: 167 167
Nov 25 09:35:19 compute-0 systemd[1]: libpod-c57bb930ed39930b1dc8f2a646c146337e069d988351d7283ba50a4a242db234.scope: Deactivated successfully.
Nov 25 09:35:19 compute-0 podman[104398]: 2025-11-25 09:35:19.968995907 +0000 UTC m=+0.080430933 container attach c57bb930ed39930b1dc8f2a646c146337e069d988351d7283ba50a4a242db234 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:35:19 compute-0 podman[104398]: 2025-11-25 09:35:19.969432008 +0000 UTC m=+0.080867035 container died c57bb930ed39930b1dc8f2a646c146337e069d988351d7283ba50a4a242db234 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fe6345ff6e2e5ffffa9d754fe860f064df0f7d03969eaf00fa40c76e6077846-merged.mount: Deactivated successfully.
Nov 25 09:35:19 compute-0 podman[104398]: 2025-11-25 09:35:19.986194252 +0000 UTC m=+0.097629278 container remove c57bb930ed39930b1dc8f2a646c146337e069d988351d7283ba50a4a242db234 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wu, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:35:19 compute-0 podman[104398]: 2025-11-25 09:35:19.904714872 +0000 UTC m=+0.016149919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:19 compute-0 systemd[1]: libpod-conmon-c57bb930ed39930b1dc8f2a646c146337e069d988351d7283ba50a4a242db234.scope: Deactivated successfully.
Nov 25 09:35:20 compute-0 python3.9[104393]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:35:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:20 : epoch 6925783f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:35:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:20 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc005320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:20 compute-0 podman[104439]: 2025-11-25 09:35:20.100920013 +0000 UTC m=+0.027801158 container create c032d520604d6f56041e77081b1c43b0a7c58ddead7aacd5cad5d9ff0ffdbbd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_panini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 25 09:35:20 compute-0 systemd[1]: Started libpod-conmon-c032d520604d6f56041e77081b1c43b0a7c58ddead7aacd5cad5d9ff0ffdbbd8.scope.
Nov 25 09:35:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2a9a0c84f66856c6e4d456e0fbfdb46b11cebf521849cc53993224eca8fd66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2a9a0c84f66856c6e4d456e0fbfdb46b11cebf521849cc53993224eca8fd66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2a9a0c84f66856c6e4d456e0fbfdb46b11cebf521849cc53993224eca8fd66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2a9a0c84f66856c6e4d456e0fbfdb46b11cebf521849cc53993224eca8fd66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:20 compute-0 podman[104439]: 2025-11-25 09:35:20.150108281 +0000 UTC m=+0.076989446 container init c032d520604d6f56041e77081b1c43b0a7c58ddead7aacd5cad5d9ff0ffdbbd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:35:20 compute-0 podman[104439]: 2025-11-25 09:35:20.154740474 +0000 UTC m=+0.081621619 container start c032d520604d6f56041e77081b1c43b0a7c58ddead7aacd5cad5d9ff0ffdbbd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_panini, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:20 compute-0 podman[104439]: 2025-11-25 09:35:20.155829237 +0000 UTC m=+0.082710382 container attach c032d520604d6f56041e77081b1c43b0a7c58ddead7aacd5cad5d9ff0ffdbbd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_panini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:35:20 compute-0 podman[104439]: 2025-11-25 09:35:20.089317377 +0000 UTC m=+0.016198542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:20] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Nov 25 09:35:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:20] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Nov 25 09:35:20 compute-0 stoic_panini[104452]: {
Nov 25 09:35:20 compute-0 stoic_panini[104452]:     "1": [
Nov 25 09:35:20 compute-0 stoic_panini[104452]:         {
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "devices": [
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "/dev/loop3"
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             ],
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "lv_name": "ceph_lv0",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "lv_size": "21470642176",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "name": "ceph_lv0",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "tags": {
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.cluster_name": "ceph",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.crush_device_class": "",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.encrypted": "0",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.osd_id": "1",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.type": "block",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.vdo": "0",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:                 "ceph.with_tpm": "0"
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             },
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "type": "block",
Nov 25 09:35:20 compute-0 stoic_panini[104452]:             "vg_name": "ceph_vg0"
Nov 25 09:35:20 compute-0 stoic_panini[104452]:         }
Nov 25 09:35:20 compute-0 stoic_panini[104452]:     ]
Nov 25 09:35:20 compute-0 stoic_panini[104452]: }
Nov 25 09:35:20 compute-0 systemd[1]: libpod-c032d520604d6f56041e77081b1c43b0a7c58ddead7aacd5cad5d9ff0ffdbbd8.scope: Deactivated successfully.
Nov 25 09:35:20 compute-0 podman[104439]: 2025-11-25 09:35:20.390118079 +0000 UTC m=+0.316999234 container died c032d520604d6f56041e77081b1c43b0a7c58ddead7aacd5cad5d9ff0ffdbbd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_panini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa2a9a0c84f66856c6e4d456e0fbfdb46b11cebf521849cc53993224eca8fd66-merged.mount: Deactivated successfully.
Nov 25 09:35:20 compute-0 podman[104439]: 2025-11-25 09:35:20.412655801 +0000 UTC m=+0.339536946 container remove c032d520604d6f56041e77081b1c43b0a7c58ddead7aacd5cad5d9ff0ffdbbd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_panini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:35:20 compute-0 systemd[1]: libpod-conmon-c032d520604d6f56041e77081b1c43b0a7c58ddead7aacd5cad5d9ff0ffdbbd8.scope: Deactivated successfully.
Nov 25 09:35:20 compute-0 sudo[104265]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:20.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:20 compute-0 sudo[104471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:20 compute-0 sudo[104471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:20 compute-0 sudo[104471]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:20 compute-0 sudo[104496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:35:20 compute-0 sudo[104496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:20 compute-0 ceph-mon[74207]: pgmap v21: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:35:20 compute-0 podman[104555]: 2025-11-25 09:35:20.817109787 +0000 UTC m=+0.027949146 container create d2560e277b64762f002d579fe556441501a7c057784b2a0c7b3daad3ef414fb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:35:20 compute-0 systemd[1]: Started libpod-conmon-d2560e277b64762f002d579fe556441501a7c057784b2a0c7b3daad3ef414fb3.scope.
Nov 25 09:35:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:20 compute-0 podman[104555]: 2025-11-25 09:35:20.866796906 +0000 UTC m=+0.077636274 container init d2560e277b64762f002d579fe556441501a7c057784b2a0c7b3daad3ef414fb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:35:20 compute-0 podman[104555]: 2025-11-25 09:35:20.870939858 +0000 UTC m=+0.081779216 container start d2560e277b64762f002d579fe556441501a7c057784b2a0c7b3daad3ef414fb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 09:35:20 compute-0 podman[104555]: 2025-11-25 09:35:20.872948664 +0000 UTC m=+0.083788022 container attach d2560e277b64762f002d579fe556441501a7c057784b2a0c7b3daad3ef414fb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:35:20 compute-0 inspiring_mcclintock[104568]: 167 167
Nov 25 09:35:20 compute-0 systemd[1]: libpod-d2560e277b64762f002d579fe556441501a7c057784b2a0c7b3daad3ef414fb3.scope: Deactivated successfully.
Nov 25 09:35:20 compute-0 podman[104555]: 2025-11-25 09:35:20.875489867 +0000 UTC m=+0.086329224 container died d2560e277b64762f002d579fe556441501a7c057784b2a0c7b3daad3ef414fb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea2f1bc034652b22feece7e85f8ef2930315f0d12055be5487b5eea4397b363e-merged.mount: Deactivated successfully.
Nov 25 09:35:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:20 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000b860 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:20 compute-0 podman[104555]: 2025-11-25 09:35:20.896397074 +0000 UTC m=+0.107236432 container remove d2560e277b64762f002d579fe556441501a7c057784b2a0c7b3daad3ef414fb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:20 compute-0 podman[104555]: 2025-11-25 09:35:20.805997987 +0000 UTC m=+0.016837355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:20 compute-0 systemd[1]: libpod-conmon-d2560e277b64762f002d579fe556441501a7c057784b2a0c7b3daad3ef414fb3.scope: Deactivated successfully.
Nov 25 09:35:21 compute-0 podman[104589]: 2025-11-25 09:35:21.011142091 +0000 UTC m=+0.029221575 container create aca3bdafb58478e6d35ee32fdf3c6fa91c3304c69b502ad13edee9df688f2458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_almeida, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:35:21 compute-0 systemd[1]: Started libpod-conmon-aca3bdafb58478e6d35ee32fdf3c6fa91c3304c69b502ad13edee9df688f2458.scope.
Nov 25 09:35:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6405295fb3716e7723e3299e3b6a91965e12667f8014bd94d89310d456e96395/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6405295fb3716e7723e3299e3b6a91965e12667f8014bd94d89310d456e96395/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6405295fb3716e7723e3299e3b6a91965e12667f8014bd94d89310d456e96395/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6405295fb3716e7723e3299e3b6a91965e12667f8014bd94d89310d456e96395/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:21 compute-0 podman[104589]: 2025-11-25 09:35:21.069363019 +0000 UTC m=+0.087442503 container init aca3bdafb58478e6d35ee32fdf3c6fa91c3304c69b502ad13edee9df688f2458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_almeida, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:21 compute-0 podman[104589]: 2025-11-25 09:35:21.074720199 +0000 UTC m=+0.092799684 container start aca3bdafb58478e6d35ee32fdf3c6fa91c3304c69b502ad13edee9df688f2458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_almeida, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:35:21 compute-0 podman[104589]: 2025-11-25 09:35:21.079053038 +0000 UTC m=+0.097132522 container attach aca3bdafb58478e6d35ee32fdf3c6fa91c3304c69b502ad13edee9df688f2458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 25 09:35:21 compute-0 podman[104589]: 2025-11-25 09:35:20.998727095 +0000 UTC m=+0.016806598 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:21.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v22: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:35:21 compute-0 admiring_almeida[104602]: {}
Nov 25 09:35:21 compute-0 lvm[104680]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:35:21 compute-0 lvm[104680]: VG ceph_vg0 finished
Nov 25 09:35:21 compute-0 podman[104589]: 2025-11-25 09:35:21.577339034 +0000 UTC m=+0.595418517 container died aca3bdafb58478e6d35ee32fdf3c6fa91c3304c69b502ad13edee9df688f2458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 09:35:21 compute-0 systemd[1]: libpod-aca3bdafb58478e6d35ee32fdf3c6fa91c3304c69b502ad13edee9df688f2458.scope: Deactivated successfully.
Nov 25 09:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6405295fb3716e7723e3299e3b6a91965e12667f8014bd94d89310d456e96395-merged.mount: Deactivated successfully.
Nov 25 09:35:21 compute-0 podman[104589]: 2025-11-25 09:35:21.599567011 +0000 UTC m=+0.617646496 container remove aca3bdafb58478e6d35ee32fdf3c6fa91c3304c69b502ad13edee9df688f2458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:35:21 compute-0 systemd[1]: libpod-conmon-aca3bdafb58478e6d35ee32fdf3c6fa91c3304c69b502ad13edee9df688f2458.scope: Deactivated successfully.
Nov 25 09:35:21 compute-0 sudo[104496]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:21 compute-0 sudo[104693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:35:21 compute-0 sudo[104693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:21 compute-0 sudo[104693]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:21 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000b860 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:21 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 25 09:35:21 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 25 09:35:21 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 25 09:35:21 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 25 09:35:21 compute-0 sudo[104719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:21 compute-0 sudo[104719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:21 compute-0 sudo[104719]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:21 compute-0 sudo[104744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:35:21 compute-0 sudo[104744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:22 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc005320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:22 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:22 compute-0 podman[104809]: 2025-11-25 09:35:22.230341897 +0000 UTC m=+0.038752374 container died dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e63e7bfdb75029d94653e54e41462db524d74b47fe824701e94ac56e7401648-merged.mount: Deactivated successfully.
Nov 25 09:35:22 compute-0 podman[104809]: 2025-11-25 09:35:22.250372523 +0000 UTC m=+0.058782969 container remove dbe7cf1e95354dccf3a167c04d98bdf6a61559ef93ad5a0125c97e6f3960ad15 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:22 compute-0 bash[104809]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0
Nov 25 09:35:22 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Nov 25 09:35:22 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@node-exporter.compute-0.service: Failed with result 'exit-code'.
Nov 25 09:35:22 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:22 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@node-exporter.compute-0.service: Consumed 1.466s CPU time.
Nov 25 09:35:22 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:22.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:22 compute-0 podman[104890]: 2025-11-25 09:35:22.514856905 +0000 UTC m=+0.027944306 container create e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa02ef8e504b05f5d09a910c9fff5fdb564e88f2f75cd86df53d856f2f5daacf/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:22 compute-0 podman[104890]: 2025-11-25 09:35:22.554609283 +0000 UTC m=+0.067696684 container init e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:22 compute-0 podman[104890]: 2025-11-25 09:35:22.558842123 +0000 UTC m=+0.071929525 container start e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:22 compute-0 bash[104890]: e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5
Nov 25 09:35:22 compute-0 podman[104890]: 2025-11-25 09:35:22.502940969 +0000 UTC m=+0.016028391 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.563Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.563Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.564Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.564Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=arp
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=bcache
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=bonding
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=cpu
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=dmi
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=edac
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=entropy
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=filefd
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=netclass
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=netdev
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=netstat
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=nfs
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=nvme
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=os
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=pressure
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=rapl
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.565Z caller=node_exporter.go:117 level=info collector=selinux
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=softnet
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=stat
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=textfile
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=time
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=uname
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=xfs
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=node_exporter.go:117 level=info collector=zfs
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0[104902]: ts=2025-11-25T09:35:22.566Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
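The block above is node_exporter enumerating its enabled collectors at startup and then binding plain HTTP on port 9100 ("TLS is disabled."). A minimal sketch of scraping that endpoint, assuming compute-0:9100 is reachable from the reader's vantage point:

    # Scrape the node_exporter started above. Plain HTTP is correct here
    # because the exporter logged "TLS is disabled." on [::]:9100.
    import urllib.request

    def fetch_node_metrics(host="compute-0", port=9100):
        with urllib.request.urlopen(f"http://{host}:{port}/metrics", timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        # Print only samples from the 'loadavg' collector listed above.
        for line in fetch_node_metrics().splitlines():
            if line.startswith("node_load"):
                print(line)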
Nov 25 09:35:22 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:22 compute-0 sudo[104744]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 25 09:35:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 25 09:35:22 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 25 09:35:22 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 25 09:35:22 compute-0 ceph-mon[74207]: pgmap v22: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:35:22 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.646578) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322646612, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1602, "num_deletes": 251, "total_data_size": 4414791, "memory_usage": 4643168, "flush_reason": "Manual Compaction"}
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322654220, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 4023900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 5840, "largest_seqno": 7441, "table_properties": {"data_size": 4016908, "index_size": 3742, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17514, "raw_average_key_size": 20, "raw_value_size": 4001389, "raw_average_value_size": 4690, "num_data_blocks": 170, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063277, "oldest_key_time": 1764063277, "file_creation_time": 1764063322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7670 microseconds, and 6049 cpu microseconds.
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.654246) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 4023900 bytes OK
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.654258) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.654828) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.654839) EVENT_LOG_v1 {"time_micros": 1764063322654836, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.654851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 4407214, prev total WAL file size 4407214, number of live WAL files 2.
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.655500) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3929KB)], [20(9814KB)]
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322655521, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 14073663, "oldest_snapshot_seqno": -1}
Nov 25 09:35:22 compute-0 sudo[104911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:22 compute-0 sudo[104911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:22 compute-0 sudo[104911]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2631 keys, 12687888 bytes, temperature: kUnknown
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322685558, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12687888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12666282, "index_size": 13945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6597, "raw_key_size": 66501, "raw_average_key_size": 25, "raw_value_size": 12613586, "raw_average_value_size": 4794, "num_data_blocks": 618, "num_entries": 2631, "num_filter_entries": 2631, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764063322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.685711) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12687888 bytes
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686077) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 467.6 rd, 421.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.6 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(6.7) write-amplify(3.2) OK, records in: 3169, records dropped: 538 output_compression: NoCompression
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686090) EVENT_LOG_v1 {"time_micros": 1764063322686085, "job": 6, "event": "compaction_finished", "compaction_time_micros": 30095, "compaction_time_cpu_micros": 17818, "output_level": 6, "num_output_files": 1, "total_output_size": 12687888, "num_input_records": 3169, "num_output_records": 2631, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322686652, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322687767, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.655475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:35:22 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
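Alongside the human-readable flush and compaction messages, the mon's RocksDB instance emits machine-readable EVENT_LOG_v1 JSON payloads (flush_started, table_file_creation, compaction_finished). A minimal sketch for pulling those events out of a captured journal like this one, assuming the log text is piped in on stdin:

    # Extract RocksDB EVENT_LOG_v1 payloads from ceph-mon journal lines.
    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    for line in sys.stdin:
        idx = line.find(MARKER)
        if idx == -1:
            continue
        try:
            event = json.loads(line[idx + len(MARKER):].strip())
        except json.JSONDecodeError:
            continue  # tolerate truncated journal lines
        # e.g. job 6 above: compaction_finished, compaction_time_micros=30095
        print(event.get("event"), "job", event.get("job"), event.get("time_micros"))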
Nov 25 09:35:22 compute-0 sudo[104936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:35:22 compute-0 sudo[104936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:22 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc005320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:22 compute-0 podman[104975]: 2025-11-25 09:35:22.953200145 +0000 UTC m=+0.027946310 volume create ee75e7cb3179a326a59dec0ffd0c91a2c8efdd9e71750d3bcbf9ba494ccf45f4
Nov 25 09:35:22 compute-0 podman[104975]: 2025-11-25 09:35:22.957983213 +0000 UTC m=+0.032729377 container create 085b5ac98915f11db5a43fe5c92294429fa1178a8667088d65289088c3a5cfe1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=wizardly_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:22 compute-0 systemd[1]: Started libpod-conmon-085b5ac98915f11db5a43fe5c92294429fa1178a8667088d65289088c3a5cfe1.scope.
Nov 25 09:35:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f15ffea397593fc7751dde371d8cd2fefc61129788b65ff927952f562afe116/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:23 compute-0 podman[104975]: 2025-11-25 09:35:23.014291415 +0000 UTC m=+0.089037600 container init 085b5ac98915f11db5a43fe5c92294429fa1178a8667088d65289088c3a5cfe1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=wizardly_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 podman[104975]: 2025-11-25 09:35:23.019079894 +0000 UTC m=+0.093826057 container start 085b5ac98915f11db5a43fe5c92294429fa1178a8667088d65289088c3a5cfe1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=wizardly_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 podman[104975]: 2025-11-25 09:35:23.020175961 +0000 UTC m=+0.094922124 container attach 085b5ac98915f11db5a43fe5c92294429fa1178a8667088d65289088c3a5cfe1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=wizardly_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 wizardly_goldstine[104992]: 65534 65534
Nov 25 09:35:23 compute-0 systemd[1]: libpod-085b5ac98915f11db5a43fe5c92294429fa1178a8667088d65289088c3a5cfe1.scope: Deactivated successfully.
Nov 25 09:35:23 compute-0 podman[104975]: 2025-11-25 09:35:23.022288102 +0000 UTC m=+0.097034256 container died 085b5ac98915f11db5a43fe5c92294429fa1178a8667088d65289088c3a5cfe1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=wizardly_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f15ffea397593fc7751dde371d8cd2fefc61129788b65ff927952f562afe116-merged.mount: Deactivated successfully.
Nov 25 09:35:23 compute-0 podman[104975]: 2025-11-25 09:35:22.941821082 +0000 UTC m=+0.016567266 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 25 09:35:23 compute-0 podman[104975]: 2025-11-25 09:35:23.04424551 +0000 UTC m=+0.118991674 container remove 085b5ac98915f11db5a43fe5c92294429fa1178a8667088d65289088c3a5cfe1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=wizardly_goldstine, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 podman[104975]: 2025-11-25 09:35:23.046270979 +0000 UTC m=+0.121017143 volume remove ee75e7cb3179a326a59dec0ffd0c91a2c8efdd9e71750d3bcbf9ba494ccf45f4
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:23 : epoch 6925783f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:23 : epoch 6925783f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:35:23 compute-0 systemd[1]: libpod-conmon-085b5ac98915f11db5a43fe5c92294429fa1178a8667088d65289088c3a5cfe1.scope: Deactivated successfully.
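The short-lived containers with random podman names (wizardly_goldstine above, epic_williams below) are cephadm probing the image before redeploying the daemon; the "65534 65534" they print is the uid/gid the alertmanager container runs as. A sketch reproducing such a probe, with the caveat that the exact command cephadm runs is an assumption here; the journal shows only the resulting output:

    # Reproduce the uid/gid probe, assuming podman is on PATH and using the
    # image tag from the log. The stat-based probe is an assumption; the
    # journal records only the "65534 65534" result, not the entrypoint used.
    import subprocess

    IMAGE = "quay.io/prometheus/alertmanager:v0.25.0"

    def probe_uid_gid(image=IMAGE, path="/etc/alertmanager"):
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", path],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()  # expected: "65534 65534"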
Nov 25 09:35:23 compute-0 podman[105007]: 2025-11-25 09:35:23.090762362 +0000 UTC m=+0.027189523 volume create 72ba51d6adbb3d192bd2fb76da88b6f6c22add3f3bd6fbd455cc01189d66e172
Nov 25 09:35:23 compute-0 podman[105007]: 2025-11-25 09:35:23.096235361 +0000 UTC m=+0.032662521 container create 1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=epic_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 systemd[1]: Started libpod-conmon-1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed.scope.
Nov 25 09:35:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:23.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
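The anonymous "HEAD / HTTP/1.0" requests that beast logs at roughly one-second intervals are typical load-balancer health checks against radosgw. A sketch of the same probe; the RGW listen port is an assumption, since the journal records only the client address and status:

    # Issue the same anonymous HEAD probe; port 8080 is an assumption, the
    # log shows only the client IP and http_status=200.
    import http.client

    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 matches the log line above
    conn.close()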
Nov 25 09:35:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v23: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:35:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991bea5e363593a27e1cd623afb953d1360de6bd4aabcbd36208be20cfb9e0f3/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:23 compute-0 podman[105007]: 2025-11-25 09:35:23.146691589 +0000 UTC m=+0.083118760 container init 1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=epic_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 podman[105007]: 2025-11-25 09:35:23.150621068 +0000 UTC m=+0.087048239 container start 1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=epic_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 podman[105007]: 2025-11-25 09:35:23.151925056 +0000 UTC m=+0.088352217 container attach 1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=epic_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 epic_williams[105021]: 65534 65534
Nov 25 09:35:23 compute-0 systemd[1]: libpod-1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed.scope: Deactivated successfully.
Nov 25 09:35:23 compute-0 conmon[105021]: conmon 1fadf03ffec923b62851 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed.scope/container/memory.events
Nov 25 09:35:23 compute-0 podman[105007]: 2025-11-25 09:35:23.153970823 +0000 UTC m=+0.090397994 container died 1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=epic_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 podman[105007]: 2025-11-25 09:35:23.172647937 +0000 UTC m=+0.109075098 container remove 1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed (image=quay.io/prometheus/alertmanager:v0.25.0, name=epic_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 podman[105007]: 2025-11-25 09:35:23.174323226 +0000 UTC m=+0.110750397 volume remove 72ba51d6adbb3d192bd2fb76da88b6f6c22add3f3bd6fbd455cc01189d66e172
Nov 25 09:35:23 compute-0 podman[105007]: 2025-11-25 09:35:23.081849538 +0000 UTC m=+0.018276719 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 25 09:35:23 compute-0 systemd[1]: libpod-conmon-1fadf03ffec923b62851d9665aba06c7790b95f5830a9419b5878cd659be89ed.scope: Deactivated successfully.
Nov 25 09:35:23 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-991bea5e363593a27e1cd623afb953d1360de6bd4aabcbd36208be20cfb9e0f3-merged.mount: Deactivated successfully.
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[96479]: ts=2025-11-25T09:35:23.322Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Nov 25 09:35:23 compute-0 podman[105061]: 2025-11-25 09:35:23.33147771 +0000 UTC m=+0.033501484 container died 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3f8578f82279ba71da6754a8d52d54e64e3669b64604653e0209b38374ffd41-merged.mount: Deactivated successfully.
Nov 25 09:35:23 compute-0 podman[105061]: 2025-11-25 09:35:23.350016503 +0000 UTC m=+0.052040277 container remove 26e220db1d5c7d27472c73e3f52d829b2b169c850bfd4cac7803406968b3e9da (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 podman[105061]: 2025-11-25 09:35:23.352281543 +0000 UTC m=+0.054305337 volume remove e462e866e02932d54eb2ee75eeae45d16be498a10b71c45c1a27830307cef46b
Nov 25 09:35:23 compute-0 bash[105061]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0
Nov 25 09:35:23 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@alertmanager.compute-0.service: Deactivated successfully.
Nov 25 09:35:23 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:23 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:23 compute-0 podman[105149]: 2025-11-25 09:35:23.617399729 +0000 UTC m=+0.028875492 volume create 582d93b4cdcef360b7b47b7c925d051c04a0b911474f4be7c59c1ce1c19a68ac
Nov 25 09:35:23 compute-0 podman[105149]: 2025-11-25 09:35:23.621299824 +0000 UTC m=+0.032775596 container create 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 ceph-mon[74207]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 25 09:35:23 compute-0 ceph-mon[74207]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 25 09:35:23 compute-0 ceph-mon[74207]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 25 09:35:23 compute-0 ceph-mon[74207]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 25 09:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a09cf203b44ff0aca0186b1963f44746ed991ab8f90c4493e57d81629089a7/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a09cf203b44ff0aca0186b1963f44746ed991ab8f90c4493e57d81629089a7/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:23 compute-0 podman[105149]: 2025-11-25 09:35:23.669266808 +0000 UTC m=+0.080742591 container init 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 podman[105149]: 2025-11-25 09:35:23.673290956 +0000 UTC m=+0.084766718 container start 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:23 compute-0 bash[105149]: 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218
Nov 25 09:35:23 compute-0 podman[105149]: 2025-11-25 09:35:23.606790397 +0000 UTC m=+0.018266179 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 25 09:35:23 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:23.696Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:23.696Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Nov 25 09:35:23 compute-0 sudo[104936]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:23.703Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.26.109 port=9094
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:23.705Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Nov 25 09:35:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:23 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 25 09:35:23 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:23 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000c6f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:23.743Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:23.743Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:23.754Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Nov 25 09:35:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:23.754Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
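After the reconfigure, the redeployed Alertmanager binds 192.168.122.100:9093 without TLS, as logged above. A minimal sketch that polls its /-/healthy liveness endpoint until the restart completes:

    # Poll the redeployed Alertmanager; the address and "TLS is disabled."
    # are taken from the lines above, /-/healthy is Alertmanager's
    # standard liveness endpoint.
    import time
    import urllib.request

    def wait_healthy(url="http://192.168.122.100:9093/-/healthy", tries=10):
        for _ in range(tries):
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    if resp.status == 200:
                        return True
            except OSError:
                pass  # connection refused while the daemon restarts
            time.sleep(1)
        return False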
Nov 25 09:35:23 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Nov 25 09:35:23 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Nov 25 09:35:23 compute-0 sudo[105186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:23 compute-0 sudo[105186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:23 compute-0 sudo[105186]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:23 compute-0 sudo[105211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 09:35:23 compute-0 sudo[105211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:24 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000c6f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:24 compute-0 podman[105251]: 2025-11-25 09:35:24.162001117 +0000 UTC m=+0.031222417 container create faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e (image=quay.io/ceph/grafana:10.4.0, name=hopeful_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 systemd[1]: Started libpod-conmon-faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e.scope.
Nov 25 09:35:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:24 compute-0 podman[105251]: 2025-11-25 09:35:24.220222176 +0000 UTC m=+0.089443496 container init faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e (image=quay.io/ceph/grafana:10.4.0, name=hopeful_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 podman[105251]: 2025-11-25 09:35:24.225536596 +0000 UTC m=+0.094757895 container start faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e (image=quay.io/ceph/grafana:10.4.0, name=hopeful_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 hopeful_bhabha[105264]: 472 0
Nov 25 09:35:24 compute-0 podman[105251]: 2025-11-25 09:35:24.227540734 +0000 UTC m=+0.096762033 container attach faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e (image=quay.io/ceph/grafana:10.4.0, name=hopeful_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 systemd[1]: libpod-faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e.scope: Deactivated successfully.
Nov 25 09:35:24 compute-0 conmon[105264]: conmon faa415d6bc48f5992be2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e.scope/container/memory.events
Nov 25 09:35:24 compute-0 podman[105251]: 2025-11-25 09:35:24.229186296 +0000 UTC m=+0.098407596 container died faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e (image=quay.io/ceph/grafana:10.4.0, name=hopeful_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 podman[105251]: 2025-11-25 09:35:24.149385101 +0000 UTC m=+0.018606411 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 25 09:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-48521971f51d09536fb3bd637121371cbd52215e7f0f325df65559ec2724bf6c-merged.mount: Deactivated successfully.
Nov 25 09:35:24 compute-0 podman[105251]: 2025-11-25 09:35:24.255748736 +0000 UTC m=+0.124970037 container remove faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e (image=quay.io/ceph/grafana:10.4.0, name=hopeful_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 systemd[1]: libpod-conmon-faa415d6bc48f5992be22cac73e978d359f99279666e70b4828300873ea0ea2e.scope: Deactivated successfully.
Nov 25 09:35:24 compute-0 podman[105279]: 2025-11-25 09:35:24.304741716 +0000 UTC m=+0.030696034 container create ca91ce5c734f18e85fe68592bcad0a5ed6faa99fcd0637a681358552c405ff55 (image=quay.io/ceph/grafana:10.4.0, name=naughty_rubin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 systemd[1]: Started libpod-conmon-ca91ce5c734f18e85fe68592bcad0a5ed6faa99fcd0637a681358552c405ff55.scope.
Nov 25 09:35:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:24 compute-0 podman[105279]: 2025-11-25 09:35:24.363497343 +0000 UTC m=+0.089451681 container init ca91ce5c734f18e85fe68592bcad0a5ed6faa99fcd0637a681358552c405ff55 (image=quay.io/ceph/grafana:10.4.0, name=naughty_rubin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 podman[105279]: 2025-11-25 09:35:24.367503546 +0000 UTC m=+0.093457864 container start ca91ce5c734f18e85fe68592bcad0a5ed6faa99fcd0637a681358552c405ff55 (image=quay.io/ceph/grafana:10.4.0, name=naughty_rubin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 podman[105279]: 2025-11-25 09:35:24.368835437 +0000 UTC m=+0.094789755 container attach ca91ce5c734f18e85fe68592bcad0a5ed6faa99fcd0637a681358552c405ff55 (image=quay.io/ceph/grafana:10.4.0, name=naughty_rubin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 naughty_rubin[105292]: 472 0
Nov 25 09:35:24 compute-0 systemd[1]: libpod-ca91ce5c734f18e85fe68592bcad0a5ed6faa99fcd0637a681358552c405ff55.scope: Deactivated successfully.
Nov 25 09:35:24 compute-0 podman[105279]: 2025-11-25 09:35:24.370319274 +0000 UTC m=+0.096273593 container died ca91ce5c734f18e85fe68592bcad0a5ed6faa99fcd0637a681358552c405ff55 (image=quay.io/ceph/grafana:10.4.0, name=naughty_rubin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba7b96b9a723e7bf6697c0314d0a7fa4961298af731b3e91b996ba645ee1b13f-merged.mount: Deactivated successfully.
Nov 25 09:35:24 compute-0 podman[105279]: 2025-11-25 09:35:24.291342734 +0000 UTC m=+0.017297073 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 25 09:35:24 compute-0 podman[105279]: 2025-11-25 09:35:24.39231773 +0000 UTC m=+0.118272037 container remove ca91ce5c734f18e85fe68592bcad0a5ed6faa99fcd0637a681358552c405ff55 (image=quay.io/ceph/grafana:10.4.0, name=naughty_rubin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 systemd[1]: libpod-conmon-ca91ce5c734f18e85fe68592bcad0a5ed6faa99fcd0637a681358552c405ff55.scope: Deactivated successfully.
Nov 25 09:35:24 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:24.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=server t=2025-11-25T09:35:24.551814421Z level=info msg="Shutdown started" reason="System signal: terminated"
Nov 25 09:35:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=tracing t=2025-11-25T09:35:24.552276121Z level=info msg="Closing tracing"
Nov 25 09:35:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=ticker t=2025-11-25T09:35:24.552303083Z level=info msg=stopped last_tick=2025-11-25T09:35:20Z
Nov 25 09:35:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[97224]: logger=grafana-apiserver t=2025-11-25T09:35:24.552340693Z level=info msg="StorageObjectCountTracker pruner is exiting"
Nov 25 09:35:24 compute-0 podman[105333]: 2025-11-25 09:35:24.564077111 +0000 UTC m=+0.036271317 container died e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-dde3a6c663f06f99580f42e40c265979e194f152a4b24a2a4c7be1556b51ff4e-merged.mount: Deactivated successfully.
Nov 25 09:35:24 compute-0 podman[105333]: 2025-11-25 09:35:24.591552272 +0000 UTC m=+0.063746468 container remove e68646e3fd07566db62d42edd0b076924b9245803f1164555eac0bfb296d8565 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 bash[105333]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0
Nov 25 09:35:24 compute-0 ceph-mon[74207]: pgmap v23: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:35:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:24 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@grafana.compute-0.service: Deactivated successfully.
Nov 25 09:35:24 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:24 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@grafana.compute-0.service: Consumed 3.174s CPU time.
Nov 25 09:35:24 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:35:24 compute-0 podman[105415]: 2025-11-25 09:35:24.835035558 +0000 UTC m=+0.031753087 container create c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0067dd5630888c74776313f9e52560092e1499e972e214e147737464bf623/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0067dd5630888c74776313f9e52560092e1499e972e214e147737464bf623/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0067dd5630888c74776313f9e52560092e1499e972e214e147737464bf623/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0067dd5630888c74776313f9e52560092e1499e972e214e147737464bf623/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0067dd5630888c74776313f9e52560092e1499e972e214e147737464bf623/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:24 compute-0 podman[105415]: 2025-11-25 09:35:24.874434821 +0000 UTC m=+0.071152370 container init c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 podman[105415]: 2025-11-25 09:35:24.884689625 +0000 UTC m=+0.081407153 container start c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:24 compute-0 bash[105415]: c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907
Nov 25 09:35:24 compute-0 podman[105415]: 2025-11-25 09:35:24.821317965 +0000 UTC m=+0.018035514 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 25 09:35:24 compute-0 systemd[1]: Started Ceph grafana.compute-0 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:35:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:24 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc006810 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:24 compute-0 sudo[105211]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:24 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-1 (unknown last config time)...
Nov 25 09:35:24 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-1 (unknown last config time)...
Nov 25 09:35:24 compute-0 ceph-mgr[74476]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-1 on compute-1
Nov 25 09:35:24 compute-0 ceph-mgr[74476]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-1 on compute-1
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.026702602Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-11-25T09:35:25Z
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027339914Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027356495Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027361404Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027381322Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027384688Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027387975Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027601617Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027606286Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027610073Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027613259Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027617477Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027621234Z level=info msg=Target target=[all]
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027628387Z level=info msg="Path Home" path=/usr/share/grafana
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027884591Z level=info msg="Path Data" path=/var/lib/grafana
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027904057Z level=info msg="Path Logs" path=/var/log/grafana
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027909097Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.027912664Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=settings t=2025-11-25T09:35:25.028088926Z level=info msg="App mode production"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=sqlstore t=2025-11-25T09:35:25.028420732Z level=info msg="Connecting to DB" dbtype=sqlite3
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=sqlstore t=2025-11-25T09:35:25.028442122Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=migrator t=2025-11-25T09:35:25.029151018Z level=info msg="Starting DB migrations"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=migrator t=2025-11-25T09:35:25.042711464Z level=info msg="migrations completed" performed=0 skipped=547 duration=577.639µs
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=sqlstore t=2025-11-25T09:35:25.043675262Z level=info msg="Created default organization"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=secrets t=2025-11-25T09:35:25.044326099Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=plugin.store t=2025-11-25T09:35:25.058572349Z level=info msg="Loading plugins..."
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v24: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:35:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:35:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:25.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=local.finder t=2025-11-25T09:35:25.119541558Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=plugin.store t=2025-11-25T09:35:25.119574951Z level=info msg="Plugins loaded" count=55 duration=61.003123ms
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=query_data t=2025-11-25T09:35:25.125491456Z level=info msg="Query Service initialization"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=live.push_http t=2025-11-25T09:35:25.13000158Z level=info msg="Live Push Gateway initialization"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=ngalert.migration t=2025-11-25T09:35:25.131691676Z level=info msg=Starting
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=ngalert.state.manager t=2025-11-25T09:35:25.138825146Z level=info msg="Running in alternative execution of Error/NoData mode"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=infra.usagestats.collector t=2025-11-25T09:35:25.140352114Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=provisioning.datasources t=2025-11-25T09:35:25.142236587Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=provisioning.alerting t=2025-11-25T09:35:25.159243151Z level=info msg="starting to provision alerting"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=provisioning.alerting t=2025-11-25T09:35:25.159260344Z level=info msg="finished to provision alerting"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=grafanaStorageLogger t=2025-11-25T09:35:25.159546703Z level=info msg="Storage starting"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=http.server t=2025-11-25T09:35:25.161075465Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=http.server t=2025-11-25T09:35:25.167090285Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=ngalert.state.manager t=2025-11-25T09:35:25.167781008Z level=info msg="Warming state cache for startup"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=ngalert.multiorg.alertmanager t=2025-11-25T09:35:25.177925875Z level=info msg="Starting MultiOrg Alertmanager"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=provisioning.dashboard t=2025-11-25T09:35:25.186708092Z level=info msg="starting to provision dashboards"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=ngalert.state.manager t=2025-11-25T09:35:25.187016714Z level=info msg="State cache has been initialized" states=0 duration=19.234363ms
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=ngalert.scheduler t=2025-11-25T09:35:25.187042523Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=ticker t=2025-11-25T09:35:25.187080434Z level=info msg=starting first_tick=2025-11-25T09:35:30Z
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=provisioning.dashboard t=2025-11-25T09:35:25.21193763Z level=info msg="finished to provision dashboards"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=grafana.update.checker t=2025-11-25T09:35:25.226913855Z level=info msg="Update check succeeded" duration=51.582553ms
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=plugins.update.checker t=2025-11-25T09:35:25.238021437Z level=info msg="Update check succeeded" duration=61.991798ms
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=grafana-apiserver t=2025-11-25T09:35:25.348707401Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=grafana-apiserver t=2025-11-25T09:35:25.34929542Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Nov 25 09:35:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:35:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:35:25 compute-0 ceph-mon[74207]: Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 25 09:35:25 compute-0 ceph-mon[74207]: Reconfiguring daemon grafana.compute-0 on compute-0
Nov 25 09:35:25 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:25 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Nov 25 09:35:25 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 25 09:35:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Nov 25 09:35:25 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 25 09:35:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Nov 25 09:35:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 25 09:35:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Nov 25 09:35:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: [prometheus INFO root] Restarting engine...
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: [25/Nov/2025:09:35:25] ENGINE Bus STOPPING
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:35:25] ENGINE Bus STOPPING
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:25.705Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000141224s
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:25 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc006810 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: [25/Nov/2025:09:35:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:35:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: [25/Nov/2025:09:35:25] ENGINE Bus STOPPED
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:35:25] ENGINE Bus STOPPED
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: [25/Nov/2025:09:35:25] ENGINE Bus STARTING
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:35:25] ENGINE Bus STARTING
Nov 25 09:35:25 compute-0 sudo[105455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:25 compute-0 sudo[105455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:25 compute-0 sudo[105455]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:25 compute-0 sudo[105491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:35:25 compute-0 sudo[105491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: [25/Nov/2025:09:35:25] ENGINE Serving on http://:::9283
Nov 25 09:35:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: [25/Nov/2025:09:35:25] ENGINE Bus STARTED
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:35:25] ENGINE Serving on http://:::9283
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:35:25] ENGINE Bus STARTED
Nov 25 09:35:25 compute-0 ceph-mgr[74476]: [prometheus INFO root] Engine started.
Nov 25 09:35:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:26 : epoch 6925783f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:35:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:26 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000c6f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:26 compute-0 sudo[104384]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:26 compute-0 podman[105586]: 2025-11-25 09:35:26.244224105 +0000 UTC m=+0.046452030 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:35:26 compute-0 podman[105586]: 2025-11-25 09:35:26.324203666 +0000 UTC m=+0.126431571 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:35:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:26 compute-0 sshd-session[103608]: Connection closed by 192.168.122.30 port 60592
Nov 25 09:35:26 compute-0 sshd-session[103586]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:35:26 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 25 09:35:26 compute-0 systemd[1]: session-38.scope: Consumed 6.524s CPU time.
Nov 25 09:35:26 compute-0 systemd-logind[744]: Session 38 logged out. Waiting for processes to exit.
Nov 25 09:35:26 compute-0 systemd-logind[744]: Removed session 38.
Nov 25 09:35:26 compute-0 ceph-mon[74207]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Nov 25 09:35:26 compute-0 ceph-mon[74207]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Nov 25 09:35:26 compute-0 ceph-mon[74207]: pgmap v24: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:35:26 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:26 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:26 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 25 09:35:26 compute-0 ceph-mon[74207]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 25 09:35:26 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 25 09:35:26 compute-0 ceph-mon[74207]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 25 09:35:26 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 25 09:35:26 compute-0 ceph-mon[74207]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 25 09:35:26 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:26 compute-0 podman[105707]: 2025-11-25 09:35:26.672673512 +0000 UTC m=+0.036610094 container exec e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:26 compute-0 podman[105707]: 2025-11-25 09:35:26.683095231 +0000 UTC m=+0.047031814 container exec_died e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:26 compute-0 podman[105777]: 2025-11-25 09:35:26.871904758 +0000 UTC m=+0.035202271 container exec 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:26 compute-0 podman[105777]: 2025-11-25 09:35:26.894064107 +0000 UTC m=+0.057361590 container exec_died 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:26 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000c6f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:27 compute-0 podman[105835]: 2025-11-25 09:35:27.032225011 +0000 UTC m=+0.034474407 container exec c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v25: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:35:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:27 compute-0 podman[105835]: 2025-11-25 09:35:27.158185013 +0000 UTC m=+0.160434410 container exec_died c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:35:27 compute-0 podman[105892]: 2025-11-25 09:35:27.303977256 +0000 UTC m=+0.035466820 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:35:27 compute-0 podman[105892]: 2025-11-25 09:35:27.312044876 +0000 UTC m=+0.043534430 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:27 compute-0 podman[105945]: 2025-11-25 09:35:27.446446353 +0000 UTC m=+0.035000420 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2023-02-22T09:23:20, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph)
Nov 25 09:35:27 compute-0 podman[105945]: 2025-11-25 09:35:27.458106106 +0000 UTC m=+0.046660173 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git)
Nov 25 09:35:27 compute-0 podman[105996]: 2025-11-25 09:35:27.593965562 +0000 UTC m=+0.034360543 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:27 compute-0 podman[105996]: 2025-11-25 09:35:27.618077762 +0000 UTC m=+0.058472743 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:27 compute-0 podman[106046]: 2025-11-25 09:35:27.722913297 +0000 UTC m=+0.034335255 container exec f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:27 compute-0 podman[106046]: 2025-11-25 09:35:27.733086467 +0000 UTC m=+0.044508426 container exec_died f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:35:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:27 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc006810 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:27 compute-0 sudo[105491]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:35:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:35:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:27 compute-0 sudo[106102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:27 compute-0 sudo[106102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:27 compute-0 sudo[106102]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:27 compute-0 sudo[106127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:35:27 compute-0 sudo[106127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:28 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc006810 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-0 ceph-mon[74207]: pgmap v25: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:35:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:35:28 compute-0 podman[106183]: 2025-11-25 09:35:28.26983054 +0000 UTC m=+0.029075019 container create 65d78a53ba162b1ec8447ed16b40c2a4571306625aced5535416d34e95a4d03b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_feynman, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:35:28 compute-0 systemd[1]: Started libpod-conmon-65d78a53ba162b1ec8447ed16b40c2a4571306625aced5535416d34e95a4d03b.scope.
Nov 25 09:35:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:28 compute-0 podman[106183]: 2025-11-25 09:35:28.317510054 +0000 UTC m=+0.076754522 container init 65d78a53ba162b1ec8447ed16b40c2a4571306625aced5535416d34e95a4d03b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:35:28 compute-0 podman[106183]: 2025-11-25 09:35:28.322280597 +0000 UTC m=+0.081525076 container start 65d78a53ba162b1ec8447ed16b40c2a4571306625aced5535416d34e95a4d03b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:28 compute-0 podman[106183]: 2025-11-25 09:35:28.323461764 +0000 UTC m=+0.082706244 container attach 65d78a53ba162b1ec8447ed16b40c2a4571306625aced5535416d34e95a4d03b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_feynman, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:35:28 compute-0 competent_feynman[106196]: 167 167
Nov 25 09:35:28 compute-0 systemd[1]: libpod-65d78a53ba162b1ec8447ed16b40c2a4571306625aced5535416d34e95a4d03b.scope: Deactivated successfully.
Nov 25 09:35:28 compute-0 podman[106183]: 2025-11-25 09:35:28.32569237 +0000 UTC m=+0.084936849 container died 65d78a53ba162b1ec8447ed16b40c2a4571306625aced5535416d34e95a4d03b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ed3845c4035e2de508c65a6e587d49e710d08fe88871da858d5022a44e7369a-merged.mount: Deactivated successfully.
Nov 25 09:35:28 compute-0 podman[106183]: 2025-11-25 09:35:28.344820093 +0000 UTC m=+0.104064572 container remove 65d78a53ba162b1ec8447ed16b40c2a4571306625aced5535416d34e95a4d03b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:35:28 compute-0 podman[106183]: 2025-11-25 09:35:28.25693108 +0000 UTC m=+0.016175569 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:28 compute-0 systemd[1]: libpod-conmon-65d78a53ba162b1ec8447ed16b40c2a4571306625aced5535416d34e95a4d03b.scope: Deactivated successfully.
Nov 25 09:35:28 compute-0 podman[106218]: 2025-11-25 09:35:28.454706881 +0000 UTC m=+0.028471360 container create 041823aee109b87d8d26f6c90dd6699f4340336f85f7edb73ec37b930157a5fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:28.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:28 compute-0 systemd[1]: Started libpod-conmon-041823aee109b87d8d26f6c90dd6699f4340336f85f7edb73ec37b930157a5fd.scope.
Nov 25 09:35:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3589129afeb3dbaa5b5247f91622117d94532409bf899a86c8bfb0719124952a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3589129afeb3dbaa5b5247f91622117d94532409bf899a86c8bfb0719124952a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3589129afeb3dbaa5b5247f91622117d94532409bf899a86c8bfb0719124952a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3589129afeb3dbaa5b5247f91622117d94532409bf899a86c8bfb0719124952a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3589129afeb3dbaa5b5247f91622117d94532409bf899a86c8bfb0719124952a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:28 compute-0 podman[106218]: 2025-11-25 09:35:28.517981687 +0000 UTC m=+0.091746188 container init 041823aee109b87d8d26f6c90dd6699f4340336f85f7edb73ec37b930157a5fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:35:28 compute-0 podman[106218]: 2025-11-25 09:35:28.524030622 +0000 UTC m=+0.097795101 container start 041823aee109b87d8d26f6c90dd6699f4340336f85f7edb73ec37b930157a5fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_gates, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:35:28 compute-0 podman[106218]: 2025-11-25 09:35:28.525101732 +0000 UTC m=+0.098866212 container attach 041823aee109b87d8d26f6c90dd6699f4340336f85f7edb73ec37b930157a5fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:28 compute-0 podman[106218]: 2025-11-25 09:35:28.443270228 +0000 UTC m=+0.017034728 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:28 compute-0 boring_gates[106231]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:35:28 compute-0 boring_gates[106231]: --> All data devices are unavailable
Nov 25 09:35:28 compute-0 systemd[1]: libpod-041823aee109b87d8d26f6c90dd6699f4340336f85f7edb73ec37b930157a5fd.scope: Deactivated successfully.
Nov 25 09:35:28 compute-0 podman[106246]: 2025-11-25 09:35:28.810843431 +0000 UTC m=+0.017935396 container died 041823aee109b87d8d26f6c90dd6699f4340336f85f7edb73ec37b930157a5fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_gates, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3589129afeb3dbaa5b5247f91622117d94532409bf899a86c8bfb0719124952a-merged.mount: Deactivated successfully.
Nov 25 09:35:28 compute-0 podman[106246]: 2025-11-25 09:35:28.829800822 +0000 UTC m=+0.036892768 container remove 041823aee109b87d8d26f6c90dd6699f4340336f85f7edb73ec37b930157a5fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_gates, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:35:28 compute-0 systemd[1]: libpod-conmon-041823aee109b87d8d26f6c90dd6699f4340336f85f7edb73ec37b930157a5fd.scope: Deactivated successfully.
Nov 25 09:35:28 compute-0 sudo[106127]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:28 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:28 compute-0 sudo[106258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:28 compute-0 sudo[106258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:28 compute-0 sudo[106258]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:28 compute-0 sudo[106283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:35:28 compute-0 sudo[106283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v26: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:35:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:29 compute-0 podman[106339]: 2025-11-25 09:35:29.234087545 +0000 UTC m=+0.027274634 container create 08ff4ab29b222bde670c6b11cbf84a3f73a90faaf16c04f413bd4e8006eded58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_burnell, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:35:29 compute-0 systemd[1]: Started libpod-conmon-08ff4ab29b222bde670c6b11cbf84a3f73a90faaf16c04f413bd4e8006eded58.scope.
Nov 25 09:35:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:29 compute-0 podman[106339]: 2025-11-25 09:35:29.285075115 +0000 UTC m=+0.078262204 container init 08ff4ab29b222bde670c6b11cbf84a3f73a90faaf16c04f413bd4e8006eded58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:35:29 compute-0 podman[106339]: 2025-11-25 09:35:29.289319226 +0000 UTC m=+0.082506317 container start 08ff4ab29b222bde670c6b11cbf84a3f73a90faaf16c04f413bd4e8006eded58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:35:29 compute-0 podman[106339]: 2025-11-25 09:35:29.290571217 +0000 UTC m=+0.083758307 container attach 08ff4ab29b222bde670c6b11cbf84a3f73a90faaf16c04f413bd4e8006eded58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 09:35:29 compute-0 focused_burnell[106352]: 167 167
Nov 25 09:35:29 compute-0 systemd[1]: libpod-08ff4ab29b222bde670c6b11cbf84a3f73a90faaf16c04f413bd4e8006eded58.scope: Deactivated successfully.
Nov 25 09:35:29 compute-0 podman[106339]: 2025-11-25 09:35:29.29275831 +0000 UTC m=+0.085945400 container died 08ff4ab29b222bde670c6b11cbf84a3f73a90faaf16c04f413bd4e8006eded58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-338e928e38549f8c2f0bdf26d12ac2fd8d60a3e960388772c320548ae2f8b540-merged.mount: Deactivated successfully.
Nov 25 09:35:29 compute-0 podman[106339]: 2025-11-25 09:35:29.309305268 +0000 UTC m=+0.102492358 container remove 08ff4ab29b222bde670c6b11cbf84a3f73a90faaf16c04f413bd4e8006eded58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:35:29 compute-0 podman[106339]: 2025-11-25 09:35:29.223015319 +0000 UTC m=+0.016202420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:29 compute-0 systemd[1]: libpod-conmon-08ff4ab29b222bde670c6b11cbf84a3f73a90faaf16c04f413bd4e8006eded58.scope: Deactivated successfully.
Nov 25 09:35:29 compute-0 podman[106373]: 2025-11-25 09:35:29.426782697 +0000 UTC m=+0.028912191 container create 2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 09:35:29 compute-0 systemd[1]: Started libpod-conmon-2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62.scope.
Nov 25 09:35:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7c5b333faa13373e8f4fd6b87ad7908c3398a4e566f73308a796dd6824314/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7c5b333faa13373e8f4fd6b87ad7908c3398a4e566f73308a796dd6824314/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7c5b333faa13373e8f4fd6b87ad7908c3398a4e566f73308a796dd6824314/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7c5b333faa13373e8f4fd6b87ad7908c3398a4e566f73308a796dd6824314/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:29 compute-0 podman[106373]: 2025-11-25 09:35:29.485651015 +0000 UTC m=+0.087780529 container init 2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:29 compute-0 podman[106373]: 2025-11-25 09:35:29.490951569 +0000 UTC m=+0.093081073 container start 2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:35:29 compute-0 podman[106373]: 2025-11-25 09:35:29.492095255 +0000 UTC m=+0.094224749 container attach 2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 09:35:29 compute-0 podman[106373]: 2025-11-25 09:35:29.415240425 +0000 UTC m=+0.017369939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]: {
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:     "1": [
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:         {
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "devices": [
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "/dev/loop3"
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             ],
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "lv_name": "ceph_lv0",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "lv_size": "21470642176",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "name": "ceph_lv0",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "tags": {
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.cluster_name": "ceph",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.crush_device_class": "",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.encrypted": "0",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.osd_id": "1",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.type": "block",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.vdo": "0",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:                 "ceph.with_tpm": "0"
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             },
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "type": "block",
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:             "vg_name": "ceph_vg0"
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:         }
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]:     ]
Nov 25 09:35:29 compute-0 peaceful_swirles[106386]: }
Nov 25 09:35:29 compute-0 systemd[1]: libpod-2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62.scope: Deactivated successfully.
Nov 25 09:35:29 compute-0 conmon[106386]: conmon 2f6900d515f9a51de665 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62.scope/container/memory.events
Nov 25 09:35:29 compute-0 podman[106373]: 2025-11-25 09:35:29.733086955 +0000 UTC m=+0.335216449 container died 2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:35:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:29 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c7c5b333faa13373e8f4fd6b87ad7908c3398a4e566f73308a796dd6824314-merged.mount: Deactivated successfully.
Nov 25 09:35:29 compute-0 podman[106373]: 2025-11-25 09:35:29.755118953 +0000 UTC m=+0.357248448 container remove 2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:29 compute-0 systemd[1]: libpod-conmon-2f6900d515f9a51de665ea9036310e8453f64b2d49d6a4b3299024a8d595fc62.scope: Deactivated successfully.
Nov 25 09:35:29 compute-0 sudo[106283]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:29 compute-0 sudo[106406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:35:29 compute-0 sudo[106406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:29 compute-0 sudo[106406]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:29 compute-0 sudo[106431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:35:29 compute-0 sudo[106431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:35:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:30 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc006810 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:30 compute-0 podman[106487]: 2025-11-25 09:35:30.14606366 +0000 UTC m=+0.027233617 container create 304473b1deb60799784cf8a82c92aec66e8c52555241fac48895885e5bc51f68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:30 compute-0 systemd[1]: Started libpod-conmon-304473b1deb60799784cf8a82c92aec66e8c52555241fac48895885e5bc51f68.scope.
Nov 25 09:35:30 compute-0 ceph-mon[74207]: pgmap v26: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:35:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:30 compute-0 podman[106487]: 2025-11-25 09:35:30.197712156 +0000 UTC m=+0.078882133 container init 304473b1deb60799784cf8a82c92aec66e8c52555241fac48895885e5bc51f68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:35:30 compute-0 podman[106487]: 2025-11-25 09:35:30.202503991 +0000 UTC m=+0.083673949 container start 304473b1deb60799784cf8a82c92aec66e8c52555241fac48895885e5bc51f68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 25 09:35:30 compute-0 podman[106487]: 2025-11-25 09:35:30.204257818 +0000 UTC m=+0.085427775 container attach 304473b1deb60799784cf8a82c92aec66e8c52555241fac48895885e5bc51f68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 25 09:35:30 compute-0 optimistic_carson[106500]: 167 167
Nov 25 09:35:30 compute-0 podman[106487]: 2025-11-25 09:35:30.205589068 +0000 UTC m=+0.086759024 container died 304473b1deb60799784cf8a82c92aec66e8c52555241fac48895885e5bc51f68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:35:30 compute-0 systemd[1]: libpod-304473b1deb60799784cf8a82c92aec66e8c52555241fac48895885e5bc51f68.scope: Deactivated successfully.
Nov 25 09:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-361d5353771954eaab9ce5372936aa2c6e5b585a65f4e0f93fddb0a195c0dc6d-merged.mount: Deactivated successfully.
Nov 25 09:35:30 compute-0 podman[106487]: 2025-11-25 09:35:30.224481768 +0000 UTC m=+0.105651724 container remove 304473b1deb60799784cf8a82c92aec66e8c52555241fac48895885e5bc51f68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:30 compute-0 podman[106487]: 2025-11-25 09:35:30.134556955 +0000 UTC m=+0.015726922 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:30] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Nov 25 09:35:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:30] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Nov 25 09:35:30 compute-0 systemd[1]: libpod-conmon-304473b1deb60799784cf8a82c92aec66e8c52555241fac48895885e5bc51f68.scope: Deactivated successfully.
Nov 25 09:35:30 compute-0 podman[106522]: 2025-11-25 09:35:30.33704985 +0000 UTC m=+0.028697105 container create be1f1dc0d3b683c85e137c012df417d516e5ebd6eacdb84a85a48b0d9db32528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 09:35:30 compute-0 systemd[1]: Started libpod-conmon-be1f1dc0d3b683c85e137c012df417d516e5ebd6eacdb84a85a48b0d9db32528.scope.
Nov 25 09:35:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa0d0be728657392c1a5646ddd0fe10b2fc7810e040adbfe3ec542e76999f5d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa0d0be728657392c1a5646ddd0fe10b2fc7810e040adbfe3ec542e76999f5d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa0d0be728657392c1a5646ddd0fe10b2fc7810e040adbfe3ec542e76999f5d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa0d0be728657392c1a5646ddd0fe10b2fc7810e040adbfe3ec542e76999f5d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:35:30 compute-0 podman[106522]: 2025-11-25 09:35:30.402505379 +0000 UTC m=+0.094152654 container init be1f1dc0d3b683c85e137c012df417d516e5ebd6eacdb84a85a48b0d9db32528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wilson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:35:30 compute-0 podman[106522]: 2025-11-25 09:35:30.407521225 +0000 UTC m=+0.099168479 container start be1f1dc0d3b683c85e137c012df417d516e5ebd6eacdb84a85a48b0d9db32528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:35:30 compute-0 podman[106522]: 2025-11-25 09:35:30.408620618 +0000 UTC m=+0.100267884 container attach be1f1dc0d3b683c85e137c012df417d516e5ebd6eacdb84a85a48b0d9db32528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wilson, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:30 compute-0 podman[106522]: 2025-11-25 09:35:30.324154908 +0000 UTC m=+0.015802173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:35:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:30.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:30 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc006810 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:30 compute-0 lvm[106611]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:35:30 compute-0 lvm[106611]: VG ceph_vg0 finished
Nov 25 09:35:30 compute-0 ecstatic_wilson[106535]: {}
Nov 25 09:35:30 compute-0 systemd[1]: libpod-be1f1dc0d3b683c85e137c012df417d516e5ebd6eacdb84a85a48b0d9db32528.scope: Deactivated successfully.
Nov 25 09:35:30 compute-0 podman[106522]: 2025-11-25 09:35:30.931265989 +0000 UTC m=+0.622913244 container died be1f1dc0d3b683c85e137c012df417d516e5ebd6eacdb84a85a48b0d9db32528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wilson, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:35:30 compute-0 lvm[106613]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:35:30 compute-0 lvm[106613]: VG ceph_vg0 finished
Nov 25 09:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa0d0be728657392c1a5646ddd0fe10b2fc7810e040adbfe3ec542e76999f5d1-merged.mount: Deactivated successfully.
Nov 25 09:35:30 compute-0 podman[106522]: 2025-11-25 09:35:30.954738985 +0000 UTC m=+0.646386241 container remove be1f1dc0d3b683c85e137c012df417d516e5ebd6eacdb84a85a48b0d9db32528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_wilson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 25 09:35:30 compute-0 systemd[1]: libpod-conmon-be1f1dc0d3b683c85e137c012df417d516e5ebd6eacdb84a85a48b0d9db32528.scope: Deactivated successfully.
Nov 25 09:35:30 compute-0 sudo[106431]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:35:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:35:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:31 compute-0 sudo[106624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:35:31 compute-0 sudo[106624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:31 compute-0 sudo[106624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v27: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:35:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:31.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093531 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:35:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:31 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:31 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:31 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:35:31 compute-0 ceph-mon[74207]: pgmap v27: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:35:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:32 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:32.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:32 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v28: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:35:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:35:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:33.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:35:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:35:33.707Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002519826s
Nov 25 09:35:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:33 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:34 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:34 compute-0 ceph-mon[74207]: pgmap v28: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:35:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:34.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:34 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57cc002600 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v29: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:35:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:35:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:35.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:35:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:35 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:36 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:36 compute-0 ceph-mon[74207]: pgmap v29: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:35:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:36 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v30: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:35:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:37.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:37 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57cc003140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:38 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:38 compute-0 ceph-mon[74207]: pgmap v30: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:35:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:38.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:38 compute-0 sudo[106659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:35:38 compute-0 sudo[106659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:38 compute-0 sudo[106659]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:38 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v31: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:35:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:39.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:39 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:40 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:40 compute-0 ceph-mon[74207]: pgmap v31: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:35:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:40] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Nov 25 09:35:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:40] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Nov 25 09:35:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:40.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:40 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v32: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:35:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:41 compute-0 sshd-session[106686]: Accepted publickey for zuul from 192.168.122.30 port 51578 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:35:41 compute-0 systemd-logind[744]: New session 39 of user zuul.
Nov 25 09:35:41 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 25 09:35:41 compute-0 sshd-session[106686]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:35:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:41 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:41 compute-0 python3.9[106840]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 09:35:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:42 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57cc003a60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:42 compute-0 ceph-mon[74207]: pgmap v32: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:35:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:42.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:42 compute-0 python3.9[107015]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:35:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:42 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v33: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:35:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:43.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:43 compute-0 sudo[107169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfuyergchjnkbemkqsgiizrhycufuzju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063343.2444842-93-78489686677122/AnsiballZ_command.py'
Nov 25 09:35:43 compute-0 sudo[107169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:43 compute-0 python3.9[107171]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:35:43 compute-0 sudo[107169]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:43 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:44 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:44 compute-0 ceph-mon[74207]: pgmap v33: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:35:44 compute-0 sudo[107324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dseknccbarzbrtmjzqshzqosebkokrsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063344.0329616-129-117111834384244/AnsiballZ_stat.py'
Nov 25 09:35:44 compute-0 sudo[107324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:44 compute-0 python3.9[107326]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:35:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:35:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:35:44 compute-0 sudo[107324]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:35:44
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.nfs', '.rgw.root', 'volumes', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms']
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:35:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:44 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57cc003a60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Nov 25 09:35:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:35:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:35:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:35:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:35:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:35:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:35:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:35:45 compute-0 sudo[107478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npeabzxxvxtvnrhppqzczphpjvaxukyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063344.7799926-162-43447572730935/AnsiballZ_file.py'
Nov 25 09:35:45 compute-0 sudo[107478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v34: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:35:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:45.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 25 09:35:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 25 09:35:45 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 25 09:35:45 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev b14c401f-5b47-4516-967a-95654d7859cf (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 09:35:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:45 compute-0 python3.9[107480]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:35:45 compute-0 sudo[107478]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:45 compute-0 sudo[107630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmcnuqjcxgkcdoxpedbskwuowdvoisdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063345.4473248-189-33445305948072/AnsiballZ_file.py'
Nov 25 09:35:45 compute-0 sudo[107630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:45 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:45 compute-0 python3.9[107632]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:35:45 compute-0 sudo[107630]: pam_unix(sudo:session): session closed for user root
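The python3.9 line above shows Zuul's Ansible run invoking ansible.builtin.file to enforce mode, ownership, and SELinux type on /var/log/journal. A hedged ad-hoc equivalent, driving the same module from the command line with the arguments recorded in the journal:

```python
# Ad-hoc equivalent of the logged ansible.builtin.file invocation; the target
# host "localhost" is an assumption, the module arguments come from the log.
import subprocess

subprocess.run(
    [
        "ansible", "localhost", "-m", "ansible.builtin.file",
        "-a", "path=/var/log/journal state=directory mode=0750 "
              "owner=root group=root setype=var_log_t",
    ],
    check=True,
)
```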
Nov 25 09:35:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:46 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 25 09:35:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 25 09:35:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 25 09:35:46 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 9cd97b4b-b74d-4e4c-9a58-9112cc75d37d (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 09:35:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:46 compute-0 ceph-mon[74207]: pgmap v34: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:35:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:46 compute-0 ceph-mon[74207]: osdmap e43: 3 total, 3 up, 3 in
Nov 25 09:35:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:46 compute-0 ceph-mon[74207]: osdmap e44: 3 total, 3 up, 3 in
Nov 25 09:35:46 compute-0 python3.9[107784]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:35:46 compute-0 network[107801]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:35:46 compute-0 network[107802]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:35:46 compute-0 network[107803]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:35:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:35:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:46.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:35:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:46 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v37: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 25 09:35:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 25 09:35:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 25 09:35:47 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 25 09:35:47 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 9207ad14-c70e-4f7d-b7aa-8ae63dc16be0 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 09:35:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:47 compute-0 ceph-mon[74207]: osdmap e45: 3 total, 3 up, 3 in
Nov 25 09:35:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:47 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:48 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 25 09:35:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 25 09:35:48 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 25 09:35:48 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev bf03a278-eac0-4426-8cdc-f2e79ff543bf (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 09:35:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Nov 25 09:35:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 25 09:35:48 compute-0 ceph-mon[74207]: pgmap v37: 12 pgs: 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 25 09:35:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:48 compute-0 ceph-mon[74207]: osdmap e46: 3 total, 3 up, 3 in
Nov 25 09:35:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:48.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:48 compute-0 python3.9[108065]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:35:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:48 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57cc003a60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v40: 74 pgs: 62 unknown, 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
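The pgmap lines track the splits in flight: freshly created PGs appear as unknown (12 pgs -> 74 pgs here) and peel off to active+clean as they peer. The same per-state summary can be read programmatically; a minimal sketch, assuming the ceph CLI and an admin keyring:

```python
# Reads the PG-state breakdown behind the "pgmap" lines above from
# "ceph status --format json" (e.g. "unknown 62", "active+clean 12").
import json
import subprocess

status = json.loads(
    subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
)
for s in status["pgmap"]["pgs_by_state"]:
    print(s["state_name"], s["count"])
```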
Nov 25 09:35:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 25 09:35:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 25 09:35:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 25 09:35:49 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 47 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=47 pruub=15.532154083s) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active pruub 216.506668091s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:35:49 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 25 09:35:49 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 47 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=47 pruub=15.532154083s) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown pruub 216.506668091s@ mbc={}] state<Start>: transitioning to Primary
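pg 4.0 restarts its peering interval on the new osdmap and walks the PeeringState machine (Start, then transitioning to Primary) before going active. The state a PG currently sits in, plus its past intervals, can be inspected directly; a sketch using the pgid from the log:

```python
# "ceph pg <pgid> query" reports the live peering state of one PG;
# pgid 4.0 is taken from the OSD lines above.
import json
import subprocess

info = json.loads(
    subprocess.run(
        ["ceph", "pg", "4.0", "query", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
)
print(info["state"], "at epoch", info["epoch"])
```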
Nov 25 09:35:49 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev af3da030-5e69-4fd1-88e0-7ba2a1620183 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 25 09:35:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 25 09:35:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 25 09:35:49 compute-0 python3.9[108215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:35:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:49 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:49 compute-0 ceph-mgr[74476]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
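The mgr progress module has now rolled the individual autoscaler events into one Global Recovery Event covering the 124 PGs not yet active+clean. A hedged wait loop for the same end condition, reusing the pgs_by_state summary shown earlier:

```python
# Polls until no PG remains outside active+clean, the condition that ends
# the Global Recovery Event above (124 PGs pending at this point in the log).
import json
import subprocess
import time

def unclean_pgs() -> int:
    st = json.loads(subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    return sum(s["count"] for s in st["pgmap"]["pgs_by_state"]
               if s["state_name"] != "active+clean")

while unclean_pgs() > 0:
    time.sleep(5)
```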
Nov 25 09:35:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:50 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 25 09:35:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:50] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Nov 25 09:35:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:35:50] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
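Both lines record the same Prometheus scrape of the mgr prometheus module, once from the container unit and once from cherrypy inside the mgr. The exporter answers plain HTTP; a sketch of the scrape, assuming the module's default port 9283 (the port does not appear in the log):

```python
# Fetches the same /metrics exposition Prometheus/2.51.0 is scraping above.
# Port 9283 is the mgr prometheus module default and is an assumption here.
import urllib.request

with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
    first_line = resp.read().decode().splitlines()[0]
print(first_line)
```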
Nov 25 09:35:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 25 09:35:50 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 25 09:35:50 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev b31624cb-0ead-4611-be6d-066a5acd6a80 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 09:35:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1f( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1e( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1d( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1c( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1b( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1a( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.19( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.8( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.6( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.5( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.4( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.3( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.f( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.d( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.c( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.2( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.9( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.a( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.b( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.e( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.10( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.11( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.13( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.7( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.14( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.15( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.16( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.17( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.18( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.12( empty local-lis/les=15/16 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1e( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.4( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.f( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-mon[74207]: 3.1d scrub starts
Nov 25 09:35:50 compute-0 ceph-mon[74207]: 3.1d scrub ok
Nov 25 09:35:50 compute-0 ceph-mon[74207]: 2.1f scrub starts
Nov 25 09:35:50 compute-0 ceph-mon[74207]: 2.1f scrub ok
Nov 25 09:35:50 compute-0 ceph-mon[74207]: pgmap v40: 74 pgs: 62 unknown, 12 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:35:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:50 compute-0 ceph-mon[74207]: osdmap e47: 3 total, 3 up, 3 in
Nov 25 09:35:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:50 compute-0 ceph-mon[74207]: osdmap e48: 3 total, 3 up, 3 in
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.0( empty local-lis/les=47/48 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.b( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.11( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.10( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.17( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.16( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.7( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.12( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 48 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=15/15 les/c/f=16/16/0 sis=47) [1] r=0 lpr=47 pi=[15,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
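With AllReplicasActivated reacted for every pg[4.x], the pool-4 split to 32 PGs is complete on osd.1. A per-pool listing confirms the new PG count; treating pool 4 as the images pool is an inference from the adjacent autoscaler event, not something the log states:

```python
# Lists PGs of one pool after the split; "images" as the name of pool 4 is
# inferred from the log. The JSON shape varies by release, hence the
# isinstance() hedge.
import json
import subprocess

out = json.loads(
    subprocess.run(
        ["ceph", "pg", "ls-by-pool", "images", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
)
stats = out["pg_stats"] if isinstance(out, dict) else out
print(len(stats), "PGs")
```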
Nov 25 09:35:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:50.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:50 compute-0 python3.9[108371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:35:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:50 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v43: 136 pgs: 31 unknown, 105 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 09:35:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Nov 25 09:35:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 25 09:35:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:51 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 25 09:35:51 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 25 09:35:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 25 09:35:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 25 09:35:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 25 09:35:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 25 09:35:51 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 1c893762-4481-4169-90fe-4e2083681837 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 25 09:35:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:51 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:51 compute-0 ceph-mon[74207]: 3.a deep-scrub starts
Nov 25 09:35:51 compute-0 ceph-mon[74207]: 3.a deep-scrub ok
Nov 25 09:35:51 compute-0 ceph-mon[74207]: 2.1d scrub starts
Nov 25 09:35:51 compute-0 ceph-mon[74207]: 2.1d scrub ok
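Scrub start/ok pairs are interleaved with the splits, each naming a single pgid. The same light or deep scrub can be requested on demand; a sketch using pgids taken from the surrounding lines:

```python
# Manually requests the two scrub flavours the mon is logging above;
# pgids 2.1d and 3.a come from the log.
import subprocess

subprocess.run(["ceph", "pg", "scrub", "2.1d"], check=True)
subprocess.run(["ceph", "pg", "deep-scrub", "3.a"], check=True)
```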
Nov 25 09:35:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 25 09:35:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:51 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 25 09:35:51 compute-0 ceph-mon[74207]: osdmap e49: 3 total, 3 up, 3 in
Nov 25 09:35:51 compute-0 sudo[108527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywicuywgdcuyhujcyupowpjeinbnggyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063351.0766585-333-132897762492949/AnsiballZ_setup.py'
Nov 25 09:35:51 compute-0 sudo[108527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:51 compute-0 python3.9[108529]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:35:51 compute-0 sudo[108527]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:51 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57cc004d90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:52 compute-0 sudo[108613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yitbqxbdlpsskomlfrgzyyvqskayuloq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063351.0766585-333-132897762492949/AnsiballZ_dnf.py'
Nov 25 09:35:52 compute-0 sudo[108613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:52 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:52 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 25 09:35:52 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 25 09:35:52 compute-0 python3.9[108615]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
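The ansible.legacy.dnf task above installs the compute-node package set with state=present. A hedged shell-out doing the same install directly with dnf, package list copied verbatim from the log:

```python
# Direct dnf equivalent of the logged ansible.legacy.dnf task (sketch only).
import subprocess

packages = [
    "driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager",
    "openstack-selinux", "python3-libselinux", "python3-pyyaml", "rsync",
    "tmpwatch", "sysstat", "iproute-tc", "ksmtuned", "systemd-container",
    "crypto-policies-scripts", "grubby", "sos",
]
subprocess.run(["dnf", "install", "-y", *packages], check=True)
```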
Nov 25 09:35:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 25 09:35:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 25 09:35:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 25 09:35:52 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev e2cb1773-9f56-410d-b6a9-dbc4601f8524 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 25 09:35:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:52 compute-0 ceph-mon[74207]: 3.7 scrub starts
Nov 25 09:35:52 compute-0 ceph-mon[74207]: 3.7 scrub ok
Nov 25 09:35:52 compute-0 ceph-mon[74207]: 2.1b scrub starts
Nov 25 09:35:52 compute-0 ceph-mon[74207]: 2.1b scrub ok
Nov 25 09:35:52 compute-0 ceph-mon[74207]: pgmap v43: 136 pgs: 31 unknown, 105 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 09:35:52 compute-0 ceph-mon[74207]: 4.1e scrub starts
Nov 25 09:35:52 compute-0 ceph-mon[74207]: 4.1e scrub ok
Nov 25 09:35:52 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:52 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:52 compute-0 ceph-mon[74207]: osdmap e50: 3 total, 3 up, 3 in
Nov 25 09:35:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:52.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:52 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v46: 182 pgs: 77 unknown, 105 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 09:35:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
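Each split is a two-step dance, which is why every pool appears twice in the audit trail: the mgr first raises the pool's target pg_num, then steps pg_num_actual up to it to perform the split. The resulting value can be read back per pool:

```python
# Reads back pg_num for the two pools being stepped above.
import subprocess

for pool in (".rgw.root", "default.rgw.log"):
    subprocess.run(["ceph", "osd", "pool", "get", pool, "pg_num"], check=True)
```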
Nov 25 09:35:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 49 pg[6.0( v 42'42 (0'0,42'42] local-lis/les=17/18 n=22 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=49 pruub=13.584969521s) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 42'41 mlcod 42'41 active pruub 218.518341064s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.0( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=49 pruub=13.584969521s) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 42'41 mlcod 0'0 unknown pruub 218.518341064s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.1( v 42'42 (0'0,42'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.2( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.3( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.4( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.5( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.6( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.7( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.8( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.9( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.a( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.c( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.d( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.e( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 50 pg[6.f( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=17/18 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Nov 25 09:35:53 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Nov 25 09:35:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 25 09:35:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 25 09:35:53 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 25 09:35:53 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev abb45e2d-0f31-406c-b2fa-70c81ba0d343 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 25 09:35:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[8.0( v 28'12 (0'0,28'12] local-lis/les=27/28 n=6 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=51 pruub=9.696038246s) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 28'11 mlcod 28'11 active pruub 214.688476562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[9.0( v 42'1151 (0'0,42'1151] local-lis/les=29/30 n=178 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=51 pruub=11.705075264s) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 42'1150 mlcod 42'1150 active pruub 216.697830200s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.c( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.8( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-mon[74207]: 3.8 scrub starts
Nov 25 09:35:53 compute-0 ceph-mon[74207]: 3.8 scrub ok
Nov 25 09:35:53 compute-0 ceph-mon[74207]: 2.1a scrub starts
Nov 25 09:35:53 compute-0 ceph-mon[74207]: 2.1a scrub ok
Nov 25 09:35:53 compute-0 ceph-mon[74207]: 4.1c deep-scrub starts
Nov 25 09:35:53 compute-0 ceph-mon[74207]: 4.1c deep-scrub ok
Nov 25 09:35:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:53 compute-0 ceph-mon[74207]: osdmap e51: 3 total, 3 up, 3 in
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.2( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.0( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 42'41 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.d( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.3( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.1( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.6( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.7( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.4( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.5( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[6.a( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=17/17 les/c/f=18/18/0 sis=49) [1] r=0 lpr=49 pi=[17,49)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[8.0( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=51 pruub=9.696038246s) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 28'11 mlcod 0'0 unknown pruub 214.688476562s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x564fb3b0ed80) operator()   moving buffer(0x564fb3c4f7e8 space 0x564fb3aa0eb0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x564fb3b0ed80) operator()   moving buffer(0x564fb3c3eca8 space 0x564fb39de690 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x564fb3b0ed80) operator()   moving buffer(0x564fb3c3ef28 space 0x564fb39df870 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 51 pg[9.0( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=51 pruub=11.705075264s) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 42'1150 mlcod 0'0 unknown pruub 216.697830200s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c72ca8 space 0x564fb398f6d0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c6ac08 space 0x564fb3b5ec40 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c6a2a8 space 0x564fb3b5e900 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c6bf68 space 0x564fb3b5e420 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c72f28 space 0x564fb3b5f1f0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c4fce8 space 0x564fb3a13ef0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c72028 space 0x564fb3b5e0e0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c94708 space 0x564fb3b5f940 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c7d4c8 space 0x564fb3b5f2c0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c6aa28 space 0x564fb3b5eb70 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c7da68 space 0x564fb3b5f460 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c71ba8 space 0x564fb3b5e1b0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c94b68 space 0x564fb3b5f600 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c7db08 space 0x564fb3b5f390 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c700c8 space 0x564fb2bffa10 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c6a5c8 space 0x564fb3b5ede0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c736a8 space 0x564fb3b5e280 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c0cac8 space 0x564fb3b5fae0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c7c668 space 0x564fb3b5e4f0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c72988 space 0x564fb3b5e690 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c6a7a8 space 0x564fb3b5e9d0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c6b568 space 0x564fb3b5eaa0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c6b888 space 0x564fb3b5ed10 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3ca9c48 space 0x564fb2bffe20 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3ca9748 space 0x564fb3a2e420 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c7d928 space 0x564fb3b5f7a0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c94528 space 0x564fb3b5f530 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c70708 space 0x564fb3b5e760 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c959c8 space 0x564fb3b5f870 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c73108 space 0x564fb3b5e350 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x564fb3b0e480) operator()   moving buffer(0x564fb3c7cf28 space 0x564fb3b5f6d0 0x0~1000 clean)
Nov 25 09:35:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:53 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:54 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57cc004d90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:54 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 25 09:35:54 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 25 09:35:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 25 09:35:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 25 09:35:54 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 25 09:35:54 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 96a24870-1867-4889-b939-63fb6b31cbae (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 25 09:35:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Nov 25 09:35:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.14( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1a( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1b( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.15( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1b( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1a( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.19( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.19( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.18( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1e( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-mon[74207]: 3.b scrub starts
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1f( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-mon[74207]: 3.b scrub ok
Nov 25 09:35:54 compute-0 ceph-mon[74207]: 2.7 deep-scrub starts
Nov 25 09:35:54 compute-0 ceph-mon[74207]: 2.7 deep-scrub ok
Nov 25 09:35:54 compute-0 ceph-mon[74207]: pgmap v46: 182 pgs: 77 unknown, 105 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 09:35:54 compute-0 ceph-mon[74207]: 4.1b deep-scrub starts
Nov 25 09:35:54 compute-0 ceph-mon[74207]: 4.1b deep-scrub ok
Nov 25 09:35:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:54 compute-0 ceph-mon[74207]: osdmap e52: 3 total, 3 up, 3 in
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1e( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1c( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1d( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1c( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.3( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.2( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.6( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.18( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.7( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.7( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.6( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.4( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.5( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.c( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.e( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1( v 28'12 (0'0,28'12] local-lis/les=27/28 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.3( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.c( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.d( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.f( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.e( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.9( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.8( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.8( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.9( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.b( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.a( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.a( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.b( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.2( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.5( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.4( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.14( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.15( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.17( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.16( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.16( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.17( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.11( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.10( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.10( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.11( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.13( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.12( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.12( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=29/30 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.13( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=27/28 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1b( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1a( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.19( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.14( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1f( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1e( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1c( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1d( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1c( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.7( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.6( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.4( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.5( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.c( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.e( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.0( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 28'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.2( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.1( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.1( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.3( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.0( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 42'1150 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.c( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.d( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.f( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.8( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.9( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.a( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.b( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.5( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.2( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.4( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.15( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.14( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.16( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.17( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.10( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.11( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.18( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.13( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 52 pg[8.12( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=27/27 les/c/f=28/28/0 sis=51) [1] r=0 lpr=51 pi=[27,51)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:54.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:54 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v49: 244 pgs: 139 unknown, 105 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:35:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 25 09:35:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:55 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:55 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 25 09:35:55 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 25 09:35:55 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 53 pg[11.0( v 42'2 (0'0,42'2] local-lis/les=33/34 n=2 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=53 pruub=13.379461288s) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 42'1 mlcod 42'1 active pruub 220.377395630s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] update: starting ev 550a33e3-2421-4c34-943c-383d0dea0406 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 53 pg[11.0( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=53 pruub=13.379461288s) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 42'1 mlcod 0'0 unknown pruub 220.377395630s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev b14c401f-5b47-4516-967a-95654d7859cf (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event b14c401f-5b47-4516-967a-95654d7859cf (PG autoscaler increasing pool 2 PGs from 1 to 32) in 10 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 9cd97b4b-b74d-4e4c-9a58-9112cc75d37d (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 9cd97b4b-b74d-4e4c-9a58-9112cc75d37d (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 9207ad14-c70e-4f7d-b7aa-8ae63dc16be0 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 9207ad14-c70e-4f7d-b7aa-8ae63dc16be0 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev bf03a278-eac0-4426-8cdc-f2e79ff543bf (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event bf03a278-eac0-4426-8cdc-f2e79ff543bf (PG autoscaler increasing pool 5 PGs from 1 to 32) in 7 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev af3da030-5e69-4fd1-88e0-7ba2a1620183 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event af3da030-5e69-4fd1-88e0-7ba2a1620183 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev b31624cb-0ead-4611-be6d-066a5acd6a80 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event b31624cb-0ead-4611-be6d-066a5acd6a80 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 1c893762-4481-4169-90fe-4e2083681837 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 1c893762-4481-4169-90fe-4e2083681837 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev e2cb1773-9f56-410d-b6a9-dbc4601f8524 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event e2cb1773-9f56-410d-b6a9-dbc4601f8524 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev abb45e2d-0f31-406c-b2fa-70c81ba0d343 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event abb45e2d-0f31-406c-b2fa-70c81ba0d343 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 96a24870-1867-4889-b939-63fb6b31cbae (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 96a24870-1867-4889-b939-63fb6b31cbae (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] complete: finished ev 550a33e3-2421-4c34-943c-383d0dea0406 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Nov 25 09:35:55 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 550a33e3-2421-4c34-943c-383d0dea0406 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Nov 25 09:35:55 compute-0 ceph-mon[74207]: 3.4 scrub starts
Nov 25 09:35:55 compute-0 ceph-mon[74207]: 3.4 scrub ok
Nov 25 09:35:55 compute-0 ceph-mon[74207]: 2.19 scrub starts
Nov 25 09:35:55 compute-0 ceph-mon[74207]: 2.19 scrub ok
Nov 25 09:35:55 compute-0 ceph-mon[74207]: 4.1d scrub starts
Nov 25 09:35:55 compute-0 ceph-mon[74207]: 4.1d scrub ok
Nov 25 09:35:55 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 09:35:55 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:55 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:55 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 25 09:35:55 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:55 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:55 compute-0 ceph-mon[74207]: osdmap e53: 3 total, 3 up, 3 in
Nov 25 09:35:55 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 25 09:35:55 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 25 09:35:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:55 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:56 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 25 09:35:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 25 09:35:56 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 25 09:35:56 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.10( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.11( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.12( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.13( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.14( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.15( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.16( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.7( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.8( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.a( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.b( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.9( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.c( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.e( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.2( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=1 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.3( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.f( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.d( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.6( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.5( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.4( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1( v 42'2 (0'0,42'2] local-lis/les=33/34 n=1 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1f( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1e( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1d( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1c( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1b( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1a( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.19( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.18( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.17( v 42'2 lc 0'0 (0'0,42'2] local-lis/les=33/34 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.10( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.11( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.12( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.13( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.15( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.16( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.14( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.8( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.b( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.c( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.0( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 42'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.e( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.3( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.9( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.f( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.6( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.2( v 42'2 (0'0,42'2] local-lis/les=53/54 n=1 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.7( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.a( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.5( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1( v 42'2 (0'0,42'2] local-lis/les=53/54 n=1 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1f( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.4( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1e( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1c( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.d( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1a( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1d( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.18( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.17( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.19( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 54 pg[11.1b( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=33/33 les/c/f=34/34/0 sis=53) [1] r=0 lpr=53 pi=[33,53)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:35:56 compute-0 ceph-mon[74207]: 3.5 scrub starts
Nov 25 09:35:56 compute-0 ceph-mon[74207]: 3.5 scrub ok
Nov 25 09:35:56 compute-0 ceph-mon[74207]: 2.1 deep-scrub starts
Nov 25 09:35:56 compute-0 ceph-mon[74207]: 2.1 deep-scrub ok
Nov 25 09:35:56 compute-0 ceph-mon[74207]: pgmap v49: 244 pgs: 139 unknown, 105 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:35:56 compute-0 ceph-mon[74207]: 4.1a scrub starts
Nov 25 09:35:56 compute-0 ceph-mon[74207]: 4.1a scrub ok
Nov 25 09:35:56 compute-0 ceph-mon[74207]: osdmap e54: 3 total, 3 up, 3 in
Nov 25 09:35:56 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 25 09:35:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000006s ======
Nov 25 09:35:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:56.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000006s
Nov 25 09:35:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:56 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57cc005aa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v52: 306 pgs: 1 peering, 62 unknown, 243 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:35:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Nov 25 09:35:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:57.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:57 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 25 09:35:57 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 25 09:35:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 25 09:35:57 compute-0 ceph-mon[74207]: 3.1e scrub starts
Nov 25 09:35:57 compute-0 ceph-mon[74207]: 3.1e scrub ok
Nov 25 09:35:57 compute-0 ceph-mon[74207]: 2.2 scrub starts
Nov 25 09:35:57 compute-0 ceph-mon[74207]: 2.2 scrub ok
Nov 25 09:35:57 compute-0 ceph-mon[74207]: 4.19 scrub starts
Nov 25 09:35:57 compute-0 ceph-mon[74207]: 4.19 scrub ok
Nov 25 09:35:57 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 09:35:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 25 09:35:57 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 25 09:35:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:35:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:57 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57bc007910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:58 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:58 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Nov 25 09:35:58 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Nov 25 09:35:58 compute-0 ceph-mon[74207]: 3.1f scrub starts
Nov 25 09:35:58 compute-0 ceph-mon[74207]: 3.1f scrub ok
Nov 25 09:35:58 compute-0 ceph-mon[74207]: 2.8 scrub starts
Nov 25 09:35:58 compute-0 ceph-mon[74207]: 2.8 scrub ok
Nov 25 09:35:58 compute-0 ceph-mon[74207]: pgmap v52: 306 pgs: 1 peering, 62 unknown, 243 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:35:58 compute-0 ceph-mon[74207]: 4.6 scrub starts
Nov 25 09:35:58 compute-0 ceph-mon[74207]: 4.6 scrub ok
Nov 25 09:35:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 09:35:58 compute-0 ceph-mon[74207]: osdmap e55: 3 total, 3 up, 3 in
Nov 25 09:35:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:58.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:58 compute-0 sudo[108675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:35:58 compute-0 sudo[108675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:35:58 compute-0 sudo[108675]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:58 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v54: 337 pgs: 1 peering, 93 unknown, 243 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:35:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:35:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:35:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:59.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:35:59 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 25 09:35:59 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 25 09:35:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 25 09:35:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 25 09:35:59 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 25 09:35:59 compute-0 ceph-mon[74207]: 3.3 scrub starts
Nov 25 09:35:59 compute-0 ceph-mon[74207]: 3.3 scrub ok
Nov 25 09:35:59 compute-0 ceph-mon[74207]: 2.5 scrub starts
Nov 25 09:35:59 compute-0 ceph-mon[74207]: 2.5 scrub ok
Nov 25 09:35:59 compute-0 ceph-mon[74207]: 4.5 deep-scrub starts
Nov 25 09:35:59 compute-0 ceph-mon[74207]: 4.5 deep-scrub ok
Nov 25 09:35:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:35:59 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:35:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:35:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:35:59 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 25 completed events
Nov 25 09:35:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:35:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:36:00 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:00 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 25 09:36:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:00] "GET /metrics HTTP/1.1" 200 48360 "" "Prometheus/2.51.0"
Nov 25 09:36:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:00] "GET /metrics HTTP/1.1" 200 48360 "" "Prometheus/2.51.0"
Nov 25 09:36:00 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 25 09:36:00 compute-0 ceph-mon[74207]: 3.2 scrub starts
Nov 25 09:36:00 compute-0 ceph-mon[74207]: 3.2 scrub ok
Nov 25 09:36:00 compute-0 ceph-mon[74207]: 2.0 scrub starts
Nov 25 09:36:00 compute-0 ceph-mon[74207]: 2.0 scrub ok
Nov 25 09:36:00 compute-0 ceph-mon[74207]: pgmap v54: 337 pgs: 1 peering, 93 unknown, 243 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:00 compute-0 ceph-mon[74207]: 4.4 scrub starts
Nov 25 09:36:00 compute-0 ceph-mon[74207]: 4.4 scrub ok
Nov 25 09:36:00 compute-0 ceph-mon[74207]: osdmap e56: 3 total, 3 up, 3 in
Nov 25 09:36:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:00.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[100571]: 25/11/2025 09:36:00 : epoch 6925783f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f57c000d010 fd 37 proxy ignored for local
Nov 25 09:36:00 compute-0 kernel: ganesha.nfsd[100662]: segfault at 50 ip 00007f586e8d132e sp 00007f583e7fb210 error 4 in libntirpc.so.5.8[7f586e8b6000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 09:36:00 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:36:00 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 25 09:36:00 compute-0 systemd[1]: Started Process Core Dump (PID 108716/UID 0).
Nov 25 09:36:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v56: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 523 B/s rd, 0 op/s
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:01.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:01 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Nov 25 09:36:01 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 25 09:36:01 compute-0 ceph-mon[74207]: 3.c scrub starts
Nov 25 09:36:01 compute-0 ceph-mon[74207]: 3.c scrub ok
Nov 25 09:36:01 compute-0 ceph-mon[74207]: 2.3 deep-scrub starts
Nov 25 09:36:01 compute-0 ceph-mon[74207]: 2.3 deep-scrub ok
Nov 25 09:36:01 compute-0 ceph-mon[74207]: 4.3 scrub starts
Nov 25 09:36:01 compute-0 ceph-mon[74207]: 4.3 scrub ok
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:36:01 compute-0 ceph-mon[74207]: 4.1 deep-scrub starts
Nov 25 09:36:01 compute-0 ceph-mon[74207]: 4.1 deep-scrub ok
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 25 09:36:01 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[2.1e( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.1b( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.10( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.1c( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.10( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.19( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.13( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.6( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.8( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.9( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.3( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[2.6( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.8( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[2.4( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.a( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.f( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[10.2( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.e( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[2.9( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.2( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.b( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.c( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.6( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.e( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.4( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[2.1( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.b( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[2.e( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[2.1f( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[12.12( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.18( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[7.1e( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[2.19( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.918549538s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983825684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.918530464s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983825684s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.12( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.939282417s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.004791260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.12( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.939270973s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.004791260s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.12( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.941215515s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.006958008s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.12( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.941204071s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.006958008s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.11( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.935851097s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001800537s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.11( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.935838699s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001800537s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.917560577s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983688354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.917544365s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983688354s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.13( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.940687180s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.006973267s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.13( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.940675735s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.006973267s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.10( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.935274124s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001785278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.10( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.935263634s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001785278s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.916941643s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983688354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.916925430s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983688354s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.14( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.940332413s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007278442s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.14( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.940320969s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007278442s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.916611671s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983688354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.916601181s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983688354s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.17( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.934562683s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001754761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.17( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.934553146s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001754761s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.916495323s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983810425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.16( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.934159279s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001754761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.16( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.934146881s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001754761s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.915963173s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983810425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.915951729s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983810425s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.916485786s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983810425s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.4( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.933608055s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001693726s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.4( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.933597565s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001693726s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.16( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.938930511s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007232666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.16( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.938918114s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007232666s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.7( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.939018250s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007507324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.7( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.939006805s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007507324s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.15( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.933089256s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001708984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.15( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.933077812s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001708984s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.5( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.925557137s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994354248s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.5( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.925541878s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994354248s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.b( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.932650566s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001663208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.b( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.932641029s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001663208s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.915635109s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983825684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.914771080s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983825684s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.914490700s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983825684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.914479256s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983825684s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.a( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.932216644s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001632690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.a( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.932208061s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001632690s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.7( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.924605370s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994262695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.7( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.924593925s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994262695s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.914041519s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983856201s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.914032936s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983856201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.9( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.931712151s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001586914s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.9( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.931704521s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001586914s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.a( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.937545776s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007522583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.a( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.937537193s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007522583s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.1( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.923906326s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994262695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.1( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.923894882s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994262695s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.913424492s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983871460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.913414955s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983871460s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.8( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.938138962s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007278442s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.8( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.936676025s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007278442s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.3( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.923422813s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994262695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.3( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.923412323s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994262695s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.912958145s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983871460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.912949562s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983871460s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.d( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.930015564s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001007080s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.d( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.930006981s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001007080s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.e( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.936299324s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007354736s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.e( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.936291695s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007354736s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.3( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.929793358s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000915527s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.3( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.929785728s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000915527s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.d( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.922998428s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994186401s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.d( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.922989845s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994186401s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.922873497s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994186401s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.922863960s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994186401s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.912533760s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983917236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.912526131s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983917236s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.912006378s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983917236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.911995888s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983917236s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.3( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.935356140s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007369995s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.3( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.935347557s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007369995s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.f( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.928869247s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001022339s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.f( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.928856850s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001022339s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.f( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.935104370s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007461548s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.f( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.935092926s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007461548s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.911417007s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.983932495s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.911407471s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.983932495s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.c( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.927835464s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000640869s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.c( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.927823067s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000640869s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.5( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.927673340s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000595093s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.5( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.927663803s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000595093s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.921117783s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994125366s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.921109200s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994125366s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.911540031s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.984619141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.911532402s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.984619141s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.6( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.927395821s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000579834s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.6( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.927387238s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000579834s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.5( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933791161s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007537842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.5( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933779716s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007537842s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.8( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.927758217s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001586914s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.8( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.927746773s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001586914s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.4( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.935078621s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.009063721s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.4( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.935069084s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.009063721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.2( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.926795006s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000839233s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.2( v 28'12 (0'0,28'12] local-lis/les=51/52 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.926786423s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000839233s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.910508156s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.984649658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.910500526s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.984649658s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1( v 42'2 (0'0,42'2] local-lis/les=53/54 n=1 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933405876s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.007598877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1( v 42'2 (0'0,42'2] local-lis/les=53/54 n=1 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933397293s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.007598877s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.919651031s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994049072s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.919638634s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994049072s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.909893036s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.984619141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.909692764s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.984619141s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.1c( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.925187111s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000274658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.1c( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.925176620s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000274658s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1e( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933814049s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.009094238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1e( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933802605s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.009094238s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1d( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933704376s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.009185791s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1d( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933688164s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.009185791s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1c( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933390617s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.009124756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1c( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933376312s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.009124756s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.909681320s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.985580444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.1f( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.924123764s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000152588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.1f( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.924113274s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000152588s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1b( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933236122s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.009460449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1b( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.933213234s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.009460449s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.908220291s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.984695435s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.908208847s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.984695435s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.18( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.925171852s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.001846313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.18( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.925161362s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.001846313s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1a( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.932271004s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.009140015s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.907727242s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.984710693s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.907715797s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.984710693s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.1b( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.922806740s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000030518s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.1b( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.922795296s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000030518s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.17( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.931870461s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.009201050s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.17( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.931861877s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.009201050s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.908079147s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 225.985488892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.908070564s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.985488892s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.14( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.922629356s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000122070s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.14( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.922620773s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000122070s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.19( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.921762466s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active pruub 222.000106812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[8.19( v 28'12 (0'0,28'12] local-lis/les=51/52 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.921738625s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.000106812s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.1a( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.932259560s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.009140015s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.19( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.930548668s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active pruub 224.009155273s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[11.19( v 42'2 (0'0,42'2] local-lis/les=53/54 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.930537224s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.009155273s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.906910896s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 225.985580444s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.1d( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.19( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.1e( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.18( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.17( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.12( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.14( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.17( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.a( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.6( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.1( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.6( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.2( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.5( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.4( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.3( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.c( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.b( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.1e( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[5.19( empty local-lis/les=0/0 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.1f( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 57 pg[3.7( empty local-lis/les=0/0 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:01 compute-0 systemd-coredump[108717]: Process 100575 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007f586e8d132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:36:02 compute-0 systemd[1]: systemd-coredump@0-108716-0.service: Deactivated successfully.
Nov 25 09:36:02 compute-0 podman[108729]: 2025-11-25 09:36:02.092032239 +0000 UTC m=+0.025863350 container died f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1df7d68d109352c6fe1c1b64ef69906d11c3f494c2902523e11aaa7e1e1ad3dc-merged.mount: Deactivated successfully.
Nov 25 09:36:02 compute-0 podman[108729]: 2025-11-25 09:36:02.116789237 +0000 UTC m=+0.050620326 container remove f69bb007e1ed952e826a397a58e40b84c9140e5aa799847ab1b48b90e7387195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:36:02 compute-0 systemd[92827]: Starting Mark boot as successful...
Nov 25 09:36:02 compute-0 systemd[92827]: Finished Mark boot as successful.
Nov 25 09:36:02 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:36:02 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:36:02 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.10 deep-scrub starts
Nov 25 09:36:02 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.10 deep-scrub ok
Nov 25 09:36:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 25 09:36:02 compute-0 ceph-mon[74207]: 3.d scrub starts
Nov 25 09:36:02 compute-0 ceph-mon[74207]: 3.d scrub ok
Nov 25 09:36:02 compute-0 ceph-mon[74207]: 2.9 scrub starts
Nov 25 09:36:02 compute-0 ceph-mon[74207]: 2.9 scrub ok
Nov 25 09:36:02 compute-0 ceph-mon[74207]: pgmap v56: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 523 B/s rd, 0 op/s
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:36:02 compute-0 ceph-mon[74207]: osdmap e57: 3 total, 3 up, 3 in
Nov 25 09:36:02 compute-0 ceph-mon[74207]: 11.10 deep-scrub starts
Nov 25 09:36:02 compute-0 ceph-mon[74207]: 11.10 deep-scrub ok
Nov 25 09:36:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 25 09:36:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.1e( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.18( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[2.19( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.19( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[10.13( v 42'96 (0'0,42'96] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.1d( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[10.15( v 54'99 lc 42'78 (0'0,54'99] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=54'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.12( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[10.14( v 54'99 lc 42'86 (0'0,54'99] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=54'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.1e( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[2.1f( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[2.e( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.6( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[2.1( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.1( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.e( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[10.8( v 42'96 (0'0,42'96] local-lis/les=57/58 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.2( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.5( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.c( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.4( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.b( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.6( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[2.9( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.b( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[10.2( v 42'96 (0'0,42'96] local-lis/les=57/58 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.3( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.a( v 42'64 lc 0'0 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=42'64 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[2.4( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.7( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.8( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[2.6( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.c( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.a( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[10.5( v 42'96 (0'0,42'96] local-lis/les=57/58 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.6( v 42'64 lc 42'45 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.17( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.19( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.1c( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.14( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.12( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[10.1b( v 42'96 (0'0,42'96] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.17( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[10.18( v 42'96 (0'0,42'96] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[10.19( v 42'96 (0'0,42'96] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[3.1f( empty local-lis/les=57/58 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[2.1e( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[5.19( empty local-lis/les=57/58 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 58 pg[12.10( v 56'65 lc 42'47 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:02.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v59: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 526 B/s rd, 0 op/s
Nov 25 09:36:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Nov 25 09:36:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 09:36:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Nov 25 09:36:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 09:36:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:36:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:03.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:36:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 25 09:36:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 09:36:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 09:36:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 25 09:36:03 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 25 09:36:03 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 59 pg[6.a( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.928671837s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994354248s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:03 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 59 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.928322792s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994186401s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:03 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 59 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.928300858s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994186401s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:03 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 59 pg[6.a( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.928314209s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994354248s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:03 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 59 pg[6.2( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.927848816s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994140625s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:03 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 59 pg[6.2( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.927831650s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994140625s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:03 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 59 pg[6.6( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.927640915s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 228.994262695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:03 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 59 pg[6.6( v 42'42 (0'0,42'42] local-lis/les=49/51 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.927621841s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.994262695s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:03 compute-0 ceph-mon[74207]: 3.1b scrub starts
Nov 25 09:36:03 compute-0 ceph-mon[74207]: 3.1b scrub ok
Nov 25 09:36:03 compute-0 ceph-mon[74207]: 10.17 deep-scrub starts
Nov 25 09:36:03 compute-0 ceph-mon[74207]: 10.17 deep-scrub ok
Nov 25 09:36:03 compute-0 ceph-mon[74207]: osdmap e58: 3 total, 3 up, 3 in
Nov 25 09:36:03 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 09:36:03 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 09:36:03 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 25 09:36:03 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 25 09:36:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 25 09:36:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 25 09:36:04 compute-0 ceph-mon[74207]: 7.1a deep-scrub starts
Nov 25 09:36:04 compute-0 ceph-mon[74207]: 7.1a deep-scrub ok
Nov 25 09:36:04 compute-0 ceph-mon[74207]: pgmap v59: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 526 B/s rd, 0 op/s
Nov 25 09:36:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 09:36:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 09:36:04 compute-0 ceph-mon[74207]: osdmap e59: 3 total, 3 up, 3 in
Nov 25 09:36:04 compute-0 ceph-mon[74207]: 8.13 scrub starts
Nov 25 09:36:04 compute-0 ceph-mon[74207]: 8.13 scrub ok
Nov 25 09:36:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 25 09:36:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 25 09:36:04 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 25 09:36:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:04.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:04 compute-0 ceph-mgr[74476]: [progress INFO root] Completed event 68983ef9-b030-4877-a3cf-d730ee6bbcb6 (Global Recovery Event) in 15 seconds
Nov 25 09:36:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v62: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Nov 25 09:36:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 09:36:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Nov 25 09:36:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 09:36:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:05.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:05 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Nov 25 09:36:05 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Nov 25 09:36:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 25 09:36:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 09:36:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 09:36:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 25 09:36:05 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 25 09:36:05 compute-0 ceph-mon[74207]: 10.16 scrub starts
Nov 25 09:36:05 compute-0 ceph-mon[74207]: 10.16 scrub ok
Nov 25 09:36:05 compute-0 ceph-mon[74207]: 11.11 scrub starts
Nov 25 09:36:05 compute-0 ceph-mon[74207]: 11.11 scrub ok
Nov 25 09:36:05 compute-0 ceph-mon[74207]: osdmap e60: 3 total, 3 up, 3 in
Nov 25 09:36:05 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 09:36:05 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 09:36:05 compute-0 ceph-mon[74207]: 11.15 deep-scrub starts
Nov 25 09:36:05 compute-0 ceph-mon[74207]: 11.15 deep-scrub ok
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.251905441s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 230.001937866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.251879692s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.001937866s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.251716614s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 230.001922607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.251687050s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.001922607s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.250897408s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 230.001708984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.250865936s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.001708984s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.249753952s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 230.000747681s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.249742508s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.000747681s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.249185562s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 230.000289917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.249175072s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.000289917s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.249142647s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 230.000396729s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.249131203s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.000396729s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.249129295s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 230.000595093s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.249108315s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.000595093s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.248538971s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 230.000167847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 61 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=12.248526573s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.000167847s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:06 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 25 09:36:06 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 25 09:36:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 25 09:36:06 compute-0 ceph-mon[74207]: 8.f scrub starts
Nov 25 09:36:06 compute-0 ceph-mon[74207]: 8.f scrub ok
Nov 25 09:36:06 compute-0 ceph-mon[74207]: 7.19 scrub starts
Nov 25 09:36:06 compute-0 ceph-mon[74207]: 7.19 scrub ok
Nov 25 09:36:06 compute-0 ceph-mon[74207]: pgmap v62: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:06 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 09:36:06 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 09:36:06 compute-0 ceph-mon[74207]: osdmap e61: 3 total, 3 up, 3 in
Nov 25 09:36:06 compute-0 ceph-mon[74207]: 4.7 scrub starts
Nov 25 09:36:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 25 09:36:06 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:06 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 62 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:06.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093606 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:36:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v65: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 465 B/s, 3 keys/s, 6 objects/s recovering
Nov 25 09:36:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:07.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:07 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.9 deep-scrub starts
Nov 25 09:36:07 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.9 deep-scrub ok
Nov 25 09:36:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 25 09:36:07 compute-0 ceph-mon[74207]: 8.6 scrub starts
Nov 25 09:36:07 compute-0 ceph-mon[74207]: 8.6 scrub ok
Nov 25 09:36:07 compute-0 ceph-mon[74207]: 12.15 deep-scrub starts
Nov 25 09:36:07 compute-0 ceph-mon[74207]: 12.15 deep-scrub ok
Nov 25 09:36:07 compute-0 ceph-mon[74207]: 4.7 scrub ok
Nov 25 09:36:07 compute-0 ceph-mon[74207]: osdmap e62: 3 total, 3 up, 3 in
Nov 25 09:36:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 25 09:36:07 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 25 09:36:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 63 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 63 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 63 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 63 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 63 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 63 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 63 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:07 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 63 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[51,62)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:08 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Nov 25 09:36:08 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Nov 25 09:36:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 25 09:36:08 compute-0 ceph-mon[74207]: 12.1a scrub starts
Nov 25 09:36:08 compute-0 ceph-mon[74207]: 12.1a scrub ok
Nov 25 09:36:08 compute-0 ceph-mon[74207]: 12.14 scrub starts
Nov 25 09:36:08 compute-0 ceph-mon[74207]: 12.14 scrub ok
Nov 25 09:36:08 compute-0 ceph-mon[74207]: pgmap v65: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 465 B/s, 3 keys/s, 6 objects/s recovering
Nov 25 09:36:08 compute-0 ceph-mon[74207]: 11.9 deep-scrub starts
Nov 25 09:36:08 compute-0 ceph-mon[74207]: 11.9 deep-scrub ok
Nov 25 09:36:08 compute-0 ceph-mon[74207]: osdmap e63: 3 total, 3 up, 3 in
Nov 25 09:36:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 25 09:36:08 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.268909454s) [2] async=[2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 235.388381958s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.268857002s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.388381958s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.268386841s) [2] async=[2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 235.388122559s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.268358231s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.388122559s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.268578529s) [2] async=[2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 235.388397217s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.268509865s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.388397217s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.268304825s) [2] async=[2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 235.388336182s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.268274307s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.388336182s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.267903328s) [2] async=[2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 235.388214111s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.267808914s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.388214111s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.267349243s) [2] async=[2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 235.388290405s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.267319679s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.388290405s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.266914368s) [2] async=[2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 235.388259888s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.266884804s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.388259888s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.261829376s) [2] async=[2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 235.383682251s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:08 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 64 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=62/63 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64 pruub=15.261721611s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.383682251s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000006s ======
Nov 25 09:36:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:08.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000006s
Nov 25 09:36:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 465 B/s, 3 keys/s, 6 objects/s recovering
Nov 25 09:36:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:09.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:09 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 25 09:36:09 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 25 09:36:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 25 09:36:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 25 09:36:09 compute-0 ceph-mon[74207]: 12.18 scrub starts
Nov 25 09:36:09 compute-0 ceph-mon[74207]: 12.18 scrub ok
Nov 25 09:36:09 compute-0 ceph-mon[74207]: 7.1c deep-scrub starts
Nov 25 09:36:09 compute-0 ceph-mon[74207]: 7.1c deep-scrub ok
Nov 25 09:36:09 compute-0 ceph-mon[74207]: 11.b deep-scrub starts
Nov 25 09:36:09 compute-0 ceph-mon[74207]: 11.b deep-scrub ok
Nov 25 09:36:09 compute-0 ceph-mon[74207]: osdmap e64: 3 total, 3 up, 3 in
Nov 25 09:36:09 compute-0 ceph-mon[74207]: 11.c scrub starts
Nov 25 09:36:09 compute-0 ceph-mon[74207]: 11.c scrub ok
Nov 25 09:36:09 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 25 09:36:09 compute-0 ceph-mgr[74476]: [progress INFO root] Writing back 26 completed events
Nov 25 09:36:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 25 09:36:09 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:10] "GET /metrics HTTP/1.1" 200 48360 "" "Prometheus/2.51.0"
Nov 25 09:36:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:10] "GET /metrics HTTP/1.1" 200 48360 "" "Prometheus/2.51.0"
Nov 25 09:36:10 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Nov 25 09:36:10 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Nov 25 09:36:10 compute-0 ceph-mon[74207]: 3.1a scrub starts
Nov 25 09:36:10 compute-0 ceph-mon[74207]: 3.1a scrub ok
Nov 25 09:36:10 compute-0 ceph-mon[74207]: 10.e scrub starts
Nov 25 09:36:10 compute-0 ceph-mon[74207]: 10.e scrub ok
Nov 25 09:36:10 compute-0 ceph-mon[74207]: pgmap v68: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 465 B/s, 3 keys/s, 6 objects/s recovering
Nov 25 09:36:10 compute-0 ceph-mon[74207]: osdmap e65: 3 total, 3 up, 3 in
Nov 25 09:36:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:10 compute-0 ceph-mon[74207]: 11.0 scrub starts
Nov 25 09:36:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:10.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v70: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 386 B/s, 1 keys/s, 9 objects/s recovering
Nov 25 09:36:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Nov 25 09:36:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 09:36:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Nov 25 09:36:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 09:36:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:11.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:11 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.f deep-scrub starts
Nov 25 09:36:11 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.f deep-scrub ok
Nov 25 09:36:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 25 09:36:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 09:36:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 09:36:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 25 09:36:11 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 25 09:36:11 compute-0 ceph-mon[74207]: 3.15 scrub starts
Nov 25 09:36:11 compute-0 ceph-mon[74207]: 3.15 scrub ok
Nov 25 09:36:11 compute-0 ceph-mon[74207]: 7.1 scrub starts
Nov 25 09:36:11 compute-0 ceph-mon[74207]: 7.1 scrub ok
Nov 25 09:36:11 compute-0 ceph-mon[74207]: 11.0 scrub ok
Nov 25 09:36:11 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 09:36:11 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 09:36:11 compute-0 ceph-mon[74207]: 4.f deep-scrub starts
Nov 25 09:36:11 compute-0 ceph-mon[74207]: 4.f deep-scrub ok
Nov 25 09:36:12 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 25 09:36:12 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 25 09:36:12 compute-0 ceph-mon[74207]: 5.13 scrub starts
Nov 25 09:36:12 compute-0 ceph-mon[74207]: 5.13 scrub ok
Nov 25 09:36:12 compute-0 ceph-mon[74207]: 10.c scrub starts
Nov 25 09:36:12 compute-0 ceph-mon[74207]: 10.c scrub ok
Nov 25 09:36:12 compute-0 ceph-mon[74207]: pgmap v70: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 386 B/s, 1 keys/s, 9 objects/s recovering
Nov 25 09:36:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 09:36:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 09:36:12 compute-0 ceph-mon[74207]: osdmap e66: 3 total, 3 up, 3 in
Nov 25 09:36:12 compute-0 ceph-mon[74207]: 8.1 scrub starts
Nov 25 09:36:12 compute-0 ceph-mon[74207]: 8.1 scrub ok
Nov 25 09:36:12 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 1.
Nov 25 09:36:12 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:36:12 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:36:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:12 compute-0 podman[108856]: 2025-11-25 09:36:12.574979258 +0000 UTC m=+0.032201868 container create 2fef05441996c67a6cc0b95ce6ad83d10e00a4df6c227d7d108d23ac7279dd8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d097a945a5381c4285fe6ab24fca8dfb1b2fd891fd97c5ceca70b36e24d9ed8b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d097a945a5381c4285fe6ab24fca8dfb1b2fd891fd97c5ceca70b36e24d9ed8b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d097a945a5381c4285fe6ab24fca8dfb1b2fd891fd97c5ceca70b36e24d9ed8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d097a945a5381c4285fe6ab24fca8dfb1b2fd891fd97c5ceca70b36e24d9ed8b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:12 compute-0 podman[108856]: 2025-11-25 09:36:12.6250975 +0000 UTC m=+0.082320130 container init 2fef05441996c67a6cc0b95ce6ad83d10e00a4df6c227d7d108d23ac7279dd8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:36:12 compute-0 podman[108856]: 2025-11-25 09:36:12.629152351 +0000 UTC m=+0.086374950 container start 2fef05441996c67a6cc0b95ce6ad83d10e00a4df6c227d7d108d23ac7279dd8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:36:12 compute-0 bash[108856]: 2fef05441996c67a6cc0b95ce6ad83d10e00a4df6c227d7d108d23ac7279dd8d
Nov 25 09:36:12 compute-0 podman[108856]: 2025-11-25 09:36:12.560358044 +0000 UTC m=+0.017580674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:36:12 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:36:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:36:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:36:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:36:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:36:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:36:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:36:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:36:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:36:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v72: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 319 B/s, 1 keys/s, 7 objects/s recovering
Nov 25 09:36:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Nov 25 09:36:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 09:36:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Nov 25 09:36:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 09:36:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:13.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:13 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Nov 25 09:36:13 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Nov 25 09:36:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 25 09:36:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 09:36:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 09:36:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 25 09:36:13 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 25 09:36:13 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 67 pg[9.5( v 54'1154 (0'0,54'1154] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=12.853592873s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=53'1152 lcod 53'1153 mlcod 53'1153 active pruub 238.001846313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:13 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 67 pg[9.5( v 54'1154 (0'0,54'1154] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=12.853558540s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=53'1152 lcod 53'1153 mlcod 0'0 unknown NOTIFY pruub 238.001846313s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:13 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 67 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=12.852192879s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 238.000732422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:13 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 67 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=12.852116585s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.000732422s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:13 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 67 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=12.851862907s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 238.000518799s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:13 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 67 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=12.851848602s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.000518799s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:13 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 67 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=12.851511002s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 238.000732422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:13 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 67 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=12.851462364s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.000732422s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:13 compute-0 ceph-mon[74207]: 5.12 scrub starts
Nov 25 09:36:13 compute-0 ceph-mon[74207]: 5.12 scrub ok
Nov 25 09:36:13 compute-0 ceph-mon[74207]: 7.7 deep-scrub starts
Nov 25 09:36:13 compute-0 ceph-mon[74207]: 7.7 deep-scrub ok
Nov 25 09:36:13 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 09:36:13 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 09:36:13 compute-0 ceph-mon[74207]: 8.0 scrub starts
Nov 25 09:36:13 compute-0 ceph-mon[74207]: 8.0 scrub ok
Nov 25 09:36:14 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 25 09:36:14 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 25 09:36:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 25 09:36:14 compute-0 ceph-mon[74207]: 3.11 deep-scrub starts
Nov 25 09:36:14 compute-0 ceph-mon[74207]: 3.11 deep-scrub ok
Nov 25 09:36:14 compute-0 ceph-mon[74207]: 10.a scrub starts
Nov 25 09:36:14 compute-0 ceph-mon[74207]: 10.a scrub ok
Nov 25 09:36:14 compute-0 ceph-mon[74207]: pgmap v72: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 319 B/s, 1 keys/s, 7 objects/s recovering
Nov 25 09:36:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 09:36:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 09:36:14 compute-0 ceph-mon[74207]: osdmap e67: 3 total, 3 up, 3 in
Nov 25 09:36:14 compute-0 ceph-mon[74207]: 8.e scrub starts
Nov 25 09:36:14 compute-0 ceph-mon[74207]: 8.e scrub ok
Nov 25 09:36:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 25 09:36:14 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 25 09:36:14 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 68 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=0 lpr=68 pi=[51,68)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:14 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 68 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=0 lpr=68 pi=[51,68)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:14 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 68 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=0 lpr=68 pi=[51,68)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:14 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 68 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=0 lpr=68 pi=[51,68)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:14 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 68 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=0 lpr=68 pi=[51,68)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:14 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 68 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=0 lpr=68 pi=[51,68)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:14 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 68 pg[9.5( v 54'1154 (0'0,54'1154] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=0 lpr=68 pi=[51,68)/1 crt=53'1152 lcod 53'1153 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:14 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 68 pg[9.5( v 54'1154 (0'0,54'1154] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=0 lpr=68 pi=[51,68)/1 crt=53'1152 lcod 53'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:14.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:36:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f5ec197c2b0>)]
Nov 25 09:36:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 25 09:36:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f5ec197c0a0>)]
Nov 25 09:36:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 25 09:36:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v75: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 321 B/s, 1 keys/s, 7 objects/s recovering
Nov 25 09:36:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Nov 25 09:36:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 09:36:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Nov 25 09:36:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 09:36:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:15.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:15 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.d deep-scrub starts
Nov 25 09:36:15 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.d deep-scrub ok
Nov 25 09:36:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 25 09:36:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 09:36:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 09:36:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 25 09:36:15 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=69) [1] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=69) [1] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=10.831178665s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 238.001785278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=10.831155777s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.001785278s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=10.831230164s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 238.001937866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=10.831130981s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.001937866s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=10.829484940s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 238.000701904s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=10.829142570s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 238.000503540s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=10.829284668s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.000701904s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=10.829128265s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.000503540s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:15 compute-0 ceph-mon[74207]: 3.e scrub starts
Nov 25 09:36:15 compute-0 ceph-mon[74207]: 3.e scrub ok
Nov 25 09:36:15 compute-0 ceph-mon[74207]: 10.9 scrub starts
Nov 25 09:36:15 compute-0 ceph-mon[74207]: 10.9 scrub ok
Nov 25 09:36:15 compute-0 ceph-mon[74207]: osdmap e68: 3 total, 3 up, 3 in
Nov 25 09:36:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 09:36:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 09:36:15 compute-0 ceph-mon[74207]: 11.d deep-scrub starts
Nov 25 09:36:15 compute-0 ceph-mon[74207]: 11.d deep-scrub ok
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=68/69 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[51,68)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=68/69 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[51,68)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.5( v 54'1154 (0'0,54'1154] local-lis/les=68/69 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[51,68)/1 crt=54'1154 lcod 53'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:15 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 69 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=68/69 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[51,68)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:16 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 25 09:36:16 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 25 09:36:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 25 09:36:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 25 09:36:16 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=68/69 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70 pruub=15.261693954s) [2] async=[2] r=-1 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 243.434539795s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=68/69 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70 pruub=15.267807961s) [2] async=[2] r=-1 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 243.440856934s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=68/69 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70 pruub=15.267871857s) [2] async=[2] r=-1 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 243.440856934s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=68/69 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70 pruub=15.267750740s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 243.440856934s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=68/69 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70 pruub=15.267748833s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 243.440856934s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.5( v 69'1160 (0'0,69'1160] local-lis/les=68/69 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70 pruub=15.267970085s) [2] async=[2] r=-1 lpr=70 pi=[51,70)/1 crt=69'1157 lcod 69'1159 mlcod 69'1159 active pruub 243.440887451s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.5( v 69'1160 (0'0,69'1160] local-lis/les=68/69 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70 pruub=15.267595291s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=69'1157 lcod 69'1159 mlcod 0'0 unknown NOTIFY pruub 243.440887451s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=68/69 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70 pruub=15.260725975s) [2] r=-1 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 243.434539795s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:16 compute-0 ceph-mon[74207]: 5.8 scrub starts
Nov 25 09:36:16 compute-0 ceph-mon[74207]: 5.8 scrub ok
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:16 compute-0 ceph-mon[74207]: 12.f scrub starts
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[6.6( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=69/70 n=2 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=69) [1] r=0 lpr=69 pi=[59,69)/1 crt=42'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:16 compute-0 ceph-mon[74207]: 12.f scrub ok
Nov 25 09:36:16 compute-0 ceph-mon[74207]: pgmap v75: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 321 B/s, 1 keys/s, 7 objects/s recovering
Nov 25 09:36:16 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 09:36:16 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 09:36:16 compute-0 ceph-mon[74207]: osdmap e69: 3 total, 3 up, 3 in
Nov 25 09:36:16 compute-0 ceph-mon[74207]: 5.b scrub starts
Nov 25 09:36:16 compute-0 ceph-mon[74207]: 5.b scrub ok
Nov 25 09:36:16 compute-0 ceph-mon[74207]: 11.6 scrub starts
Nov 25 09:36:16 compute-0 ceph-mon[74207]: 11.6 scrub ok
Nov 25 09:36:16 compute-0 ceph-mon[74207]: osdmap e70: 3 total, 3 up, 3 in
Nov 25 09:36:16 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 70 pg[6.e( v 42'42 lc 41'13 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=59/59 les/c/f=60/60/0 sis=69) [1] r=0 lpr=69 pi=[59,69)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:36:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:16.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:36:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v78: 337 pgs: 4 unknown, 4 active+remapped, 2 peering, 327 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 188 B/s, 7 objects/s recovering
Nov 25 09:36:17 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Nov 25 09:36:17 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Nov 25 09:36:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:17.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 25 09:36:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 25 09:36:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 25 09:36:17 compute-0 ceph-mon[74207]: 12.0 deep-scrub starts
Nov 25 09:36:17 compute-0 ceph-mon[74207]: 12.0 deep-scrub ok
Nov 25 09:36:17 compute-0 ceph-mon[74207]: 5.0 scrub starts
Nov 25 09:36:17 compute-0 ceph-mon[74207]: 5.0 scrub ok
Nov 25 09:36:17 compute-0 ceph-mon[74207]: 4.0 scrub starts
Nov 25 09:36:17 compute-0 ceph-mon[74207]: 4.0 scrub ok
Nov 25 09:36:17 compute-0 ceph-mon[74207]: osdmap e71: 3 total, 3 up, 3 in
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 71 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] async=[0] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 71 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] async=[0] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 71 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] async=[0] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 71 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=70) [0]/[1] async=[0] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.zcfgby(active, since 92s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:36:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 25 09:36:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 25 09:36:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 72 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72 pruub=15.752518654s) [0] async=[0] r=-1 lpr=72 pi=[51,72)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 245.184448242s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 72 pg[9.e( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72 pruub=15.752472878s) [0] r=-1 lpr=72 pi=[51,72)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.184448242s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 72 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72 pruub=15.758201599s) [0] async=[0] r=-1 lpr=72 pi=[51,72)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 245.190307617s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 72 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72 pruub=15.758084297s) [0] r=-1 lpr=72 pi=[51,72)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.190307617s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 72 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72 pruub=15.757751465s) [0] async=[0] r=-1 lpr=72 pi=[51,72)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 245.190261841s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 72 pg[9.6( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72 pruub=15.757672310s) [0] r=-1 lpr=72 pi=[51,72)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.190261841s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 72 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72 pruub=15.757151604s) [0] async=[0] r=-1 lpr=72 pi=[51,72)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 245.190338135s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:17 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 72 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/51 les/c/f=71/52/0 sis=72 pruub=15.757017136s) [0] r=-1 lpr=72 pi=[51,72)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.190338135s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:18 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Nov 25 09:36:18 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Nov 25 09:36:18 compute-0 ceph-mon[74207]: 10.6 scrub starts
Nov 25 09:36:18 compute-0 ceph-mon[74207]: 10.6 scrub ok
Nov 25 09:36:18 compute-0 ceph-mon[74207]: pgmap v78: 337 pgs: 4 unknown, 4 active+remapped, 2 peering, 327 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 188 B/s, 7 objects/s recovering
Nov 25 09:36:18 compute-0 ceph-mon[74207]: mgrmap e30: compute-0.zcfgby(active, since 92s), standbys: compute-2.flybft, compute-1.plffrn
Nov 25 09:36:18 compute-0 ceph-mon[74207]: osdmap e72: 3 total, 3 up, 3 in
Nov 25 09:36:18 compute-0 ceph-mon[74207]: 3.0 scrub starts
Nov 25 09:36:18 compute-0 ceph-mon[74207]: 3.0 scrub ok
Nov 25 09:36:18 compute-0 ceph-mon[74207]: 8.7 deep-scrub starts
Nov 25 09:36:18 compute-0 ceph-mon[74207]: 8.7 deep-scrub ok
Nov 25 09:36:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 25 09:36:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 25 09:36:18 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 25 09:36:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:18 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:36:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:18 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:36:18 compute-0 sudo[108946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:18 compute-0 sudo[108946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:18 compute-0 sudo[108946]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 4 unknown, 4 active+remapped, 2 peering, 327 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 203 B/s, 7 objects/s recovering
Nov 25 09:36:19 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 25 09:36:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:19.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:19 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 25 09:36:19 compute-0 ceph-mon[74207]: 7.d scrub starts
Nov 25 09:36:19 compute-0 ceph-mon[74207]: 7.d scrub ok
Nov 25 09:36:19 compute-0 ceph-mon[74207]: osdmap e73: 3 total, 3 up, 3 in
Nov 25 09:36:19 compute-0 ceph-mon[74207]: 5.4 scrub starts
Nov 25 09:36:19 compute-0 ceph-mon[74207]: 5.4 scrub ok
Nov 25 09:36:19 compute-0 ceph-mon[74207]: 4.b scrub starts
Nov 25 09:36:20 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 25 09:36:20 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 25 09:36:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:20] "GET /metrics HTTP/1.1" 200 48367 "" "Prometheus/2.51.0"
Nov 25 09:36:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:20] "GET /metrics HTTP/1.1" 200 48367 "" "Prometheus/2.51.0"
Nov 25 09:36:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:20 compute-0 ceph-mon[74207]: pgmap v82: 337 pgs: 4 unknown, 4 active+remapped, 2 peering, 327 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 203 B/s, 7 objects/s recovering
Nov 25 09:36:20 compute-0 ceph-mon[74207]: 10.0 deep-scrub starts
Nov 25 09:36:20 compute-0 ceph-mon[74207]: 10.0 deep-scrub ok
Nov 25 09:36:20 compute-0 ceph-mon[74207]: 4.b scrub ok
Nov 25 09:36:20 compute-0 ceph-mon[74207]: 5.e scrub starts
Nov 25 09:36:20 compute-0 ceph-mon[74207]: 5.e scrub ok
Nov 25 09:36:20 compute-0 ceph-mon[74207]: 11.1f scrub starts
Nov 25 09:36:20 compute-0 ceph-mon[74207]: 11.1f scrub ok
Nov 25 09:36:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v83: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.3 KiB/s wr, 81 op/s; 46 B/s, 4 objects/s recovering
Nov 25 09:36:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Nov 25 09:36:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 09:36:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Nov 25 09:36:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 09:36:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:21.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=infra.usagestats t=2025-11-25T09:36:21.183422526Z level=info msg="Usage stats are ready to report"
Nov 25 09:36:21 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 25 09:36:21 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 25 09:36:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 25 09:36:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 09:36:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 09:36:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 25 09:36:21 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 25 09:36:21 compute-0 ceph-mon[74207]: 7.0 scrub starts
Nov 25 09:36:21 compute-0 ceph-mon[74207]: 7.0 scrub ok
Nov 25 09:36:21 compute-0 ceph-mon[74207]: 3.9 scrub starts
Nov 25 09:36:21 compute-0 ceph-mon[74207]: 3.9 scrub ok
Nov 25 09:36:21 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 09:36:21 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 09:36:21 compute-0 ceph-mon[74207]: 4.10 scrub starts
Nov 25 09:36:21 compute-0 ceph-mon[74207]: 4.10 scrub ok
Nov 25 09:36:22 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 25 09:36:22 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 25 09:36:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:36:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:22.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:36:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:22 compute-0 ceph-mon[74207]: pgmap v83: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.3 KiB/s wr, 81 op/s; 46 B/s, 4 objects/s recovering
Nov 25 09:36:22 compute-0 ceph-mon[74207]: 10.d scrub starts
Nov 25 09:36:22 compute-0 ceph-mon[74207]: 10.d scrub ok
Nov 25 09:36:22 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 09:36:22 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 09:36:22 compute-0 ceph-mon[74207]: osdmap e74: 3 total, 3 up, 3 in
Nov 25 09:36:22 compute-0 ceph-mon[74207]: 5.d scrub starts
Nov 25 09:36:22 compute-0 ceph-mon[74207]: 5.d scrub ok
Nov 25 09:36:22 compute-0 ceph-mon[74207]: 11.2 scrub starts
Nov 25 09:36:22 compute-0 ceph-mon[74207]: 11.2 scrub ok
Nov 25 09:36:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.1 KiB/s wr, 67 op/s; 38 B/s, 3 objects/s recovering
Nov 25 09:36:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Nov 25 09:36:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 09:36:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Nov 25 09:36:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 09:36:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:23.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:23 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 25 09:36:23 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 25 09:36:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 25 09:36:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 09:36:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 09:36:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 25 09:36:23 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 25 09:36:23 compute-0 ceph-mon[74207]: 10.b scrub starts
Nov 25 09:36:23 compute-0 ceph-mon[74207]: 10.b scrub ok
Nov 25 09:36:23 compute-0 ceph-mon[74207]: 5.1a scrub starts
Nov 25 09:36:23 compute-0 ceph-mon[74207]: 5.1a scrub ok
Nov 25 09:36:23 compute-0 ceph-mon[74207]: pgmap v85: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.1 KiB/s wr, 67 op/s; 38 B/s, 3 objects/s recovering
Nov 25 09:36:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 09:36:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 09:36:23 compute-0 ceph-mon[74207]: 4.11 scrub starts
Nov 25 09:36:23 compute-0 ceph-mon[74207]: 4.11 scrub ok
Nov 25 09:36:23 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 75 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75 pruub=10.365956306s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 246.001861572s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:23 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 75 pg[6.8( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=75 pruub=9.358586311s) [0] r=-1 lpr=75 pi=[49,75)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 244.994644165s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:23 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 75 pg[6.8( v 42'42 (0'0,42'42] local-lis/les=49/51 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=75 pruub=9.358566284s) [0] r=-1 lpr=75 pi=[49,75)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 244.994644165s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:23 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 75 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75 pruub=10.364594460s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 246.000793457s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:23 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 75 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75 pruub=10.364583015s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.000793457s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:23 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 75 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75 pruub=10.365365028s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.001861572s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:24 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 25 09:36:24 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 25 09:36:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:24.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 25 09:36:24 compute-0 ceph-mon[74207]: 12.d scrub starts
Nov 25 09:36:24 compute-0 ceph-mon[74207]: 12.d scrub ok
Nov 25 09:36:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 09:36:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 09:36:24 compute-0 ceph-mon[74207]: osdmap e75: 3 total, 3 up, 3 in
Nov 25 09:36:24 compute-0 ceph-mon[74207]: 11.16 scrub starts
Nov 25 09:36:24 compute-0 ceph-mon[74207]: 11.16 scrub ok
Nov 25 09:36:24 compute-0 ceph-mon[74207]: 8.1d scrub starts
Nov 25 09:36:24 compute-0 ceph-mon[74207]: 8.1d scrub ok
Nov 25 09:36:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 25 09:36:24 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 25 09:36:24 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 76 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=0 lpr=76 pi=[51,76)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:24 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 76 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=0 lpr=76 pi=[51,76)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:24 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 76 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=0 lpr=76 pi=[51,76)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:24 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 76 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=0 lpr=76 pi=[51,76)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:36:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:24 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1023 B/s wr, 63 op/s; 36 B/s, 3 objects/s recovering
Nov 25 09:36:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Nov 25 09:36:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 09:36:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Nov 25 09:36:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 09:36:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:25.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:25 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 25 09:36:25 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 25 09:36:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 25 09:36:25 compute-0 ceph-mon[74207]: 7.c scrub starts
Nov 25 09:36:25 compute-0 ceph-mon[74207]: 7.c scrub ok
Nov 25 09:36:25 compute-0 ceph-mon[74207]: osdmap e76: 3 total, 3 up, 3 in
Nov 25 09:36:25 compute-0 ceph-mon[74207]: 11.17 scrub starts
Nov 25 09:36:25 compute-0 ceph-mon[74207]: 11.17 scrub ok
Nov 25 09:36:25 compute-0 ceph-mon[74207]: pgmap v88: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1023 B/s wr, 63 op/s; 36 B/s, 3 objects/s recovering
Nov 25 09:36:25 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 09:36:25 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 09:36:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 09:36:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 09:36:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 25 09:36:25 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 25 09:36:25 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 77 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77 pruub=8.513644218s) [2] r=-1 lpr=77 pi=[51,77)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 246.001235962s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:25 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 77 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77 pruub=8.513620377s) [2] r=-1 lpr=77 pi=[51,77)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.001235962s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:25 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 77 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77 pruub=8.512610435s) [2] r=-1 lpr=77 pi=[51,77)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 246.000595093s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:25 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 77 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77 pruub=8.512595177s) [2] r=-1 lpr=77 pi=[51,77)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.000595093s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:25 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 77 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:25 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 77 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[51,76)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:25 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 77 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[51,76)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:25 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:26 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb4001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:26 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1e deep-scrub starts
Nov 25 09:36:26 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1e deep-scrub ok
Nov 25 09:36:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 25 09:36:26 compute-0 ceph-mon[74207]: 12.5 scrub starts
Nov 25 09:36:26 compute-0 ceph-mon[74207]: 12.5 scrub ok
Nov 25 09:36:26 compute-0 ceph-mon[74207]: 4.12 scrub starts
Nov 25 09:36:26 compute-0 ceph-mon[74207]: 4.12 scrub ok
Nov 25 09:36:26 compute-0 ceph-mon[74207]: 12.11 deep-scrub starts
Nov 25 09:36:26 compute-0 ceph-mon[74207]: 12.11 deep-scrub ok
Nov 25 09:36:26 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 09:36:26 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 09:36:26 compute-0 ceph-mon[74207]: osdmap e77: 3 total, 3 up, 3 in
Nov 25 09:36:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 25 09:36:26 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 25 09:36:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997782707s) [2] async=[2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 253.493423462s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997731209s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.493423462s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997035027s) [2] async=[2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 253.493438721s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997002602s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.493438721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=77/78 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093626 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:36:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:26 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v91: 337 pgs: 2 unknown, 2 active+remapped, 1 peering, 332 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 25 09:36:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:27.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:27 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Nov 25 09:36:27 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Nov 25 09:36:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 25 09:36:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 25 09:36:27 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 25 09:36:27 compute-0 ceph-mon[74207]: 10.7 deep-scrub starts
Nov 25 09:36:27 compute-0 ceph-mon[74207]: 10.7 deep-scrub ok
Nov 25 09:36:27 compute-0 ceph-mon[74207]: 8.1e deep-scrub starts
Nov 25 09:36:27 compute-0 ceph-mon[74207]: 8.1e deep-scrub ok
Nov 25 09:36:27 compute-0 ceph-mon[74207]: 8.16 scrub starts
Nov 25 09:36:27 compute-0 ceph-mon[74207]: 8.16 scrub ok
Nov 25 09:36:27 compute-0 ceph-mon[74207]: osdmap e78: 3 total, 3 up, 3 in
Nov 25 09:36:27 compute-0 ceph-mon[74207]: pgmap v91: 337 pgs: 2 unknown, 2 active+remapped, 1 peering, 332 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 25 09:36:27 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:27 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:27 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:28 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc0002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:28 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 25 09:36:28 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 25 09:36:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 25 09:36:28 compute-0 ceph-mon[74207]: 12.1 scrub starts
Nov 25 09:36:28 compute-0 ceph-mon[74207]: 12.1 scrub ok
Nov 25 09:36:28 compute-0 ceph-mon[74207]: 4.16 deep-scrub starts
Nov 25 09:36:28 compute-0 ceph-mon[74207]: 4.16 deep-scrub ok
Nov 25 09:36:28 compute-0 ceph-mon[74207]: 8.15 scrub starts
Nov 25 09:36:28 compute-0 ceph-mon[74207]: 8.15 scrub ok
Nov 25 09:36:28 compute-0 ceph-mon[74207]: osdmap e79: 3 total, 3 up, 3 in
Nov 25 09:36:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 25 09:36:28 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 25 09:36:28 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.957972527s) [2] async=[2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 255.504089355s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:28 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.957801819s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.504089355s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:28 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.958954811s) [2] async=[2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 255.505615234s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:28 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.958662987s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.505615234s@ mbc={}] state<Start>: transitioning to Stray
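
The four osd.1 lines above record placement groups 9.9 and 9.19 moving off this OSD at epoch 80: PeeringState::start_peering_interval prints the up/acting transition (acting [1] -> [2]), the role drops from 0 (primary) to -1, and each PG restarts peering as a Stray replica. A small sketch that extracts those transitions from journal lines, assuming the bracketed layout shown above:

    import re

    INTERVAL = re.compile(
        r'pg\[(?P<pgid>[0-9a-f.]+)\(.*?'
        r'up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], '
        r'acting \[(?P<acting_old>[\d,]*)\] -> \[(?P<acting_new>[\d,]*)\]'
    )

    def peering_transition(line):
        # Returns pgid plus old/new up and acting sets for
        # start_peering_interval lines, or None otherwise.
        m = INTERVAL.search(line)
        return m.groupdict() if m else None
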
Nov 25 09:36:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:28 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v94: 337 pgs: 2 unknown, 2 active+remapped, 1 peering, 332 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 25 09:36:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:29.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:29 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 25 09:36:29 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 25 09:36:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:29 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 25 09:36:29 compute-0 ceph-mon[74207]: 7.15 deep-scrub starts
Nov 25 09:36:29 compute-0 ceph-mon[74207]: 7.15 deep-scrub ok
Nov 25 09:36:29 compute-0 ceph-mon[74207]: 8.1a scrub starts
Nov 25 09:36:29 compute-0 ceph-mon[74207]: 8.1a scrub ok
Nov 25 09:36:29 compute-0 ceph-mon[74207]: 2.a scrub starts
Nov 25 09:36:29 compute-0 ceph-mon[74207]: 2.a scrub ok
Nov 25 09:36:29 compute-0 ceph-mon[74207]: osdmap e80: 3 total, 3 up, 3 in
Nov 25 09:36:29 compute-0 ceph-mon[74207]: pgmap v94: 337 pgs: 2 unknown, 2 active+remapped, 1 peering, 332 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 25 09:36:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 25 09:36:29 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 25 09:36:29 compute-0 sudo[108613]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:36:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
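
Pairs like the two lines above show the path every orchestrator query takes: the mgr (mgr.compute-0.zcfgby, client id mgr.14661) sends a structured mon command, and the mon's audit channel records the dispatch; "osd blocklist ls" is a routine poll for blocklisted client addresses. The same query from outside, sketched under the assumption that a ceph CLI and an admin keyring are available on the node:

    import json
    import subprocess

    def osd_blocklist():
        # Equivalent to the audited mon command above:
        #   {"prefix": "osd blocklist ls", "format": "json"}
        out = subprocess.run(
            ["ceph", "osd", "blocklist", "ls", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)
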
Nov 25 09:36:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:30 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:30] "GET /metrics HTTP/1.1" 200 48364 "" "Prometheus/2.51.0"
Nov 25 09:36:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:30] "GET /metrics HTTP/1.1" 200 48364 "" "Prometheus/2.51.0"
Nov 25 09:36:30 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 25 09:36:30 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 25 09:36:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:30 compute-0 ceph-mon[74207]: 12.1f deep-scrub starts
Nov 25 09:36:30 compute-0 ceph-mon[74207]: 12.1f deep-scrub ok
Nov 25 09:36:30 compute-0 ceph-mon[74207]: 11.18 scrub starts
Nov 25 09:36:30 compute-0 ceph-mon[74207]: 11.18 scrub ok
Nov 25 09:36:30 compute-0 ceph-mon[74207]: 11.3 scrub starts
Nov 25 09:36:30 compute-0 ceph-mon[74207]: 11.3 scrub ok
Nov 25 09:36:30 compute-0 ceph-mon[74207]: osdmap e81: 3 total, 3 up, 3 in
Nov 25 09:36:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:30 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc0002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v96: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 37 op/s; 75 B/s, 2 objects/s recovering
Nov 25 09:36:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Nov 25 09:36:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 09:36:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Nov 25 09:36:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
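
These dispatches are the mgr walking pgp_num toward each pool's target one step at a time: pgp_num_actual goes to 11 here and to 12 at 09:36:33 below, and every step cuts a new osdmap epoch, which is why the pgmap briefly shows remapped and peering PGs between otherwise clean states. A sketch of that ramp as observed in this log (the starting value of 10 is an inference, and the one-step policy is what the log shows rather than a documented contract):

    import subprocess

    def ramp_pgp_num(pool, current, target):
        # Mirror the audited commands above: raise pgp_num_actual one step
        # at a time so each osdmap change moves only a little data.
        for val in range(current + 1, target + 1):
            subprocess.run(
                ["ceph", "osd", "pool", "set", pool,
                 "pgp_num_actual", str(val)],
                check=True,
            )

    ramp_pgp_num("cephfs.cephfs.meta", current=10, target=12)
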
Nov 25 09:36:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:31.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:31 compute-0 sudo[109022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:31 compute-0 sudo[109022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:31 compute-0 sudo[109022]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:31 compute-0 sudo[109047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:36:31 compute-0 sudo[109047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:31 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 25 09:36:31 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 25 09:36:31 compute-0 podman[109125]: 2025-11-25 09:36:31.641982674 +0000 UTC m=+0.041895119 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:36:31 compute-0 podman[109125]: 2025-11-25 09:36:31.713252367 +0000 UTC m=+0.113164793 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
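
The exec / exec_died pair on the mon container (and the matching pairs for node-exporter, alertmanager, grafana, haproxy, keepalived, and prometheus just below) are short-lived commands run inside each managed container, consistent with cephadm's periodic per-daemon refresh. A minimal equivalent of one probe, assuming podman on PATH and reusing the container name from this log:

    import subprocess

    NAME = "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0"

    # Running any short command inside the container produces a matching
    # "container exec" / "container exec_died" journal pair like the above.
    subprocess.run(["podman", "exec", NAME, "true"], check=True)
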
Nov 25 09:36:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:31 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 25 09:36:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 09:36:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 09:36:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 25 09:36:31 compute-0 ceph-mon[74207]: 2.11 scrub starts
Nov 25 09:36:31 compute-0 ceph-mon[74207]: 2.11 scrub ok
Nov 25 09:36:31 compute-0 ceph-mon[74207]: 4.17 scrub starts
Nov 25 09:36:31 compute-0 ceph-mon[74207]: 4.17 scrub ok
Nov 25 09:36:31 compute-0 ceph-mon[74207]: 12.4 scrub starts
Nov 25 09:36:31 compute-0 ceph-mon[74207]: 12.4 scrub ok
Nov 25 09:36:31 compute-0 ceph-mon[74207]: pgmap v96: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 37 op/s; 75 B/s, 2 objects/s recovering
Nov 25 09:36:31 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 09:36:31 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 09:36:31 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 25 09:36:32 compute-0 podman[109237]: 2025-11-25 09:36:32.050039388 +0000 UTC m=+0.034032674 container exec e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:36:32 compute-0 podman[109237]: 2025-11-25 09:36:32.058081481 +0000 UTC m=+0.042074757 container exec_died e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:36:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.173921585s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 254.001953125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.173894882s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.001953125s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.171956062s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 254.000564575s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.171936989s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.000564575s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:32 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:32 compute-0 podman[109306]: 2025-11-25 09:36:32.248147916 +0000 UTC m=+0.034343929 container exec 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:36:32 compute-0 podman[109306]: 2025-11-25 09:36:32.269081187 +0000 UTC m=+0.055272733 container exec_died 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:36:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093632 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
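
haproxy marked nfs.cephfs.1 DOWN after a Layer4 check: the plain TCP connect to that ganesha backend was refused, leaving 2 of 3 servers in the backend. A Layer4 check is nothing more than a completed handshake, sketched here with a hypothetical backend address and port:

    import socket

    def layer4_check(host, port, timeout=2.0):
        # haproxy's Layer4 health check reduces to: does a TCP handshake
        # complete? "Connection refused" marks the server DOWN.
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return True
        except OSError:
            return False

    print(layer4_check("192.168.122.101", 12049))  # hypothetical backend
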
Nov 25 09:36:32 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 25 09:36:32 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 25 09:36:32 compute-0 podman[109362]: 2025-11-25 09:36:32.405011449 +0000 UTC m=+0.034772957 container exec c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:36:32 compute-0 podman[109362]: 2025-11-25 09:36:32.520859469 +0000 UTC m=+0.150620957 container exec_died c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:36:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:32.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:36:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:36:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:32 compute-0 podman[109420]: 2025-11-25 09:36:32.661199509 +0000 UTC m=+0.031021537 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:36:32 compute-0 podman[109420]: 2025-11-25 09:36:32.66912878 +0000 UTC m=+0.038950809 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:36:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 25 09:36:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 25 09:36:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 25 09:36:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:32 compute-0 podman[109473]: 2025-11-25 09:36:32.797039921 +0000 UTC m=+0.030955192 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, distribution-scope=public, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.component=keepalived-container, vcs-type=git, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 25 09:36:32 compute-0 podman[109473]: 2025-11-25 09:36:32.809031176 +0000 UTC m=+0.042946437 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, distribution-scope=public, name=keepalived)
Nov 25 09:36:32 compute-0 ceph-mon[74207]: 7.17 scrub starts
Nov 25 09:36:32 compute-0 ceph-mon[74207]: 7.17 scrub ok
Nov 25 09:36:32 compute-0 ceph-mon[74207]: 5.1e scrub starts
Nov 25 09:36:32 compute-0 ceph-mon[74207]: 5.1e scrub ok
Nov 25 09:36:32 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 09:36:32 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 09:36:32 compute-0 ceph-mon[74207]: osdmap e82: 3 total, 3 up, 3 in
Nov 25 09:36:32 compute-0 ceph-mon[74207]: 2.b scrub starts
Nov 25 09:36:32 compute-0 ceph-mon[74207]: 2.b scrub ok
Nov 25 09:36:32 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:32 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:32 compute-0 ceph-mon[74207]: osdmap e83: 3 total, 3 up, 3 in
Nov 25 09:36:32 compute-0 podman[109524]: 2025-11-25 09:36:32.937768953 +0000 UTC m=+0.030524151 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:36:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:32 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:32 compute-0 podman[109524]: 2025-11-25 09:36:32.960078704 +0000 UTC m=+0.052833923 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:36:33 compute-0 podman[109570]: 2025-11-25 09:36:33.057046087 +0000 UTC m=+0.030493975 container exec 2fef05441996c67a6cc0b95ce6ad83d10e00a4df6c227d7d108d23ac7279dd8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:36:33 compute-0 podman[109570]: 2025-11-25 09:36:33.064084111 +0000 UTC m=+0.037531979 container exec_died 2fef05441996c67a6cc0b95ce6ad83d10e00a4df6c227d7d108d23ac7279dd8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:36:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v99: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 38 op/s; 76 B/s, 2 objects/s recovering
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 09:36:33 compute-0 sudo[109047]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:33.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-0 sudo[109626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:33 compute-0 sudo[109626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:33 compute-0 sudo[109626]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:33 compute-0 sudo[109651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:36:33 compute-0 sudo[109651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:33 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 25 09:36:33 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 25 09:36:33 compute-0 sudo[109651]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
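
This run of mon commands is the preamble to the OSD deployment attempt below: cephadm fetches a minimal ceph.conf and the client.admin and client.bootstrap-osd keys, persists its specs under config-key, and asks for any OSD ids in the "destroyed" state that a new OSD could reuse. The destroyed-id query, sketched with the same CLI assumption as above:

    import json
    import subprocess

    def destroyed_osd_ids():
        # Matches the audited query above: list OSDs currently in the
        # "destroyed" state, whose ids can be reused by new deployments.
        out = subprocess.run(
            ["ceph", "osd", "tree", "destroyed", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        tree = json.loads(out)
        return [n["id"] for n in tree.get("nodes", [])
                if n.get("type") == "osd"]
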
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 09:36:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 25 09:36:33 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 25 09:36:33 compute-0 sudo[109706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:33 compute-0 sudo[109706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:33 compute-0 sudo[109706]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:33 compute-0 sudo[109731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:36:33 compute-0 sudo[109731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:33 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc0002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:33 compute-0 ceph-mon[74207]: 10.1a scrub starts
Nov 25 09:36:33 compute-0 ceph-mon[74207]: 10.1a scrub ok
Nov 25 09:36:33 compute-0 ceph-mon[74207]: 3.18 scrub starts
Nov 25 09:36:33 compute-0 ceph-mon[74207]: 3.18 scrub ok
Nov 25 09:36:33 compute-0 ceph-mon[74207]: 12.13 scrub starts
Nov 25 09:36:33 compute-0 ceph-mon[74207]: 12.13 scrub ok
Nov 25 09:36:33 compute-0 ceph-mon[74207]: pgmap v99: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 38 op/s; 76 B/s, 2 objects/s recovering
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 09:36:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 09:36:33 compute-0 ceph-mon[74207]: osdmap e84: 3 total, 3 up, 3 in
Nov 25 09:36:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:34 compute-0 podman[109788]: 2025-11-25 09:36:34.045860321 +0000 UTC m=+0.026905151 container create 8c2af24a51aef549ff311c7711665454402de617cf26ccb1925fd4d32efadf09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_austin, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:36:34 compute-0 systemd[1]: Started libpod-conmon-8c2af24a51aef549ff311c7711665454402de617cf26ccb1925fd4d32efadf09.scope.
Nov 25 09:36:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:34 compute-0 podman[109788]: 2025-11-25 09:36:34.099000089 +0000 UTC m=+0.080044919 container init 8c2af24a51aef549ff311c7711665454402de617cf26ccb1925fd4d32efadf09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_austin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:36:34 compute-0 podman[109788]: 2025-11-25 09:36:34.104026127 +0000 UTC m=+0.085070967 container start 8c2af24a51aef549ff311c7711665454402de617cf26ccb1925fd4d32efadf09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 25 09:36:34 compute-0 podman[109788]: 2025-11-25 09:36:34.10577638 +0000 UTC m=+0.086821210 container attach 8c2af24a51aef549ff311c7711665454402de617cf26ccb1925fd4d32efadf09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:36:34 compute-0 jovial_austin[109801]: 167 167
Nov 25 09:36:34 compute-0 podman[109788]: 2025-11-25 09:36:34.10729622 +0000 UTC m=+0.088341050 container died 8c2af24a51aef549ff311c7711665454402de617cf26ccb1925fd4d32efadf09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:36:34 compute-0 systemd[1]: libpod-8c2af24a51aef549ff311c7711665454402de617cf26ccb1925fd4d32efadf09.scope: Deactivated successfully.
Nov 25 09:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9db83941681539c33f7212ed448fe02b4271ceaaefb0aa850f07965cf3162cfa-merged.mount: Deactivated successfully.
Nov 25 09:36:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:34 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:34 compute-0 podman[109788]: 2025-11-25 09:36:34.131721494 +0000 UTC m=+0.112766323 container remove 8c2af24a51aef549ff311c7711665454402de617cf26ccb1925fd4d32efadf09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_austin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:36:34 compute-0 podman[109788]: 2025-11-25 09:36:34.034885258 +0000 UTC m=+0.015930109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:36:34 compute-0 systemd[1]: libpod-conmon-8c2af24a51aef549ff311c7711665454402de617cf26ccb1925fd4d32efadf09.scope: Deactivated successfully.
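
The throwaway container jovial_austin lived for a few milliseconds, printed "167 167", and was removed: this matches cephadm probing the uid and gid of the ceph user inside the image (167:167 on these images) before it touches data directories on the host. A sketch of such a probe, offered as an assumption about what the container ran rather than a confirmed command line:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Print the owning uid/gid of /var/lib/ceph inside the image; on this
    # image that is expected to be "167 167", as logged above.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g",
         "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())
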
Nov 25 09:36:34 compute-0 podman[109824]: 2025-11-25 09:36:34.247262897 +0000 UTC m=+0.033820083 container create e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:36:34 compute-0 systemd[1]: Started libpod-conmon-e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388.scope.
Nov 25 09:36:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f743e5b105fd9e95f890e282f266ee5d3952d5b7de9272ee5616b7d135af04a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f743e5b105fd9e95f890e282f266ee5d3952d5b7de9272ee5616b7d135af04a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f743e5b105fd9e95f890e282f266ee5d3952d5b7de9272ee5616b7d135af04a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f743e5b105fd9e95f890e282f266ee5d3952d5b7de9272ee5616b7d135af04a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f743e5b105fd9e95f890e282f266ee5d3952d5b7de9272ee5616b7d135af04a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
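
The kernel notes that each XFS path bind-mounted into the new container carries timestamps only until 2038 (0x7fffffff): the filesystem predates the XFS bigtime feature, so its inode timestamps are still 32-bit time_t. The cutoff is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the classic 32-bit time_t limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
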
Nov 25 09:36:34 compute-0 podman[109824]: 2025-11-25 09:36:34.307314702 +0000 UTC m=+0.093871878 container init e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:36:34 compute-0 podman[109824]: 2025-11-25 09:36:34.313931233 +0000 UTC m=+0.100488409 container start e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hellman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Nov 25 09:36:34 compute-0 podman[109824]: 2025-11-25 09:36:34.31512511 +0000 UTC m=+0.101682286 container attach e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:36:34 compute-0 podman[109824]: 2025-11-25 09:36:34.235098286 +0000 UTC m=+0.021655482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:36:34 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Nov 25 09:36:34 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Nov 25 09:36:34 compute-0 reverent_hellman[109837]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:36:34 compute-0 reverent_hellman[109837]: --> All data devices are unavailable
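
reverent_hellman is the ceph-volume run launched by the 09:36:33 sudo command above: "lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" was handed one LVM data device, found it unavailable (typically because the LV already belongs to an existing OSD), and created nothing; cephadm follows up below with "lvm list" to inventory what is already there. A sketch of that inventory step, assuming ceph-volume runs in an environment where it can see the cluster's LVs:

    import json
    import subprocess

    # List LVs already tagged for OSDs, as the follow-up "lvm list" does;
    # a device that appears here is "unavailable" to a new lvm batch.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, devices in json.loads(out).items():
        print(osd_id, [d.get("lv_path") for d in devices])
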
Nov 25 09:36:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:34 compute-0 systemd[1]: libpod-e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388.scope: Deactivated successfully.
Nov 25 09:36:34 compute-0 conmon[109837]: conmon e242bb1250fe4d5bebda <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388.scope/container/memory.events
Nov 25 09:36:34 compute-0 podman[109824]: 2025-11-25 09:36:34.584030812 +0000 UTC m=+0.370587988 container died e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f743e5b105fd9e95f890e282f266ee5d3952d5b7de9272ee5616b7d135af04a1-merged.mount: Deactivated successfully.
Nov 25 09:36:34 compute-0 podman[109824]: 2025-11-25 09:36:34.607976963 +0000 UTC m=+0.394534129 container remove e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 09:36:34 compute-0 systemd[1]: libpod-conmon-e242bb1250fe4d5bebda1c2253d6b2f5e4a4d8fdd2829203a33c87deeda30388.scope: Deactivated successfully.
Nov 25 09:36:34 compute-0 sudo[109731]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:34 compute-0 sudo[109861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:34 compute-0 sudo[109861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:34 compute-0 sudo[109861]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:34 compute-0 sudo[109886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:36:34 compute-0 sudo[109886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 25 09:36:34 compute-0 ceph-mon[74207]: 2.14 scrub starts
Nov 25 09:36:34 compute-0 ceph-mon[74207]: 2.14 scrub ok
Nov 25 09:36:34 compute-0 ceph-mon[74207]: 3.19 scrub starts
Nov 25 09:36:34 compute-0 ceph-mon[74207]: 3.19 scrub ok
Nov 25 09:36:34 compute-0 ceph-mon[74207]: 8.2 scrub starts
Nov 25 09:36:34 compute-0 ceph-mon[74207]: 8.2 scrub ok
Nov 25 09:36:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 25 09:36:34 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 25 09:36:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.177088737s) [0] async=[0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 261.761322021s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176831245s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761322021s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176769257s) [0] async=[0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 261.761627197s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176725388s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761627197s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:34 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:35 compute-0 podman[109943]: 2025-11-25 09:36:35.002266832 +0000 UTC m=+0.025675948 container create 8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:36:35 compute-0 systemd[1]: Started libpod-conmon-8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f.scope.
Nov 25 09:36:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:35 compute-0 podman[109943]: 2025-11-25 09:36:35.039686607 +0000 UTC m=+0.063095723 container init 8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:36:35 compute-0 podman[109943]: 2025-11-25 09:36:35.044158302 +0000 UTC m=+0.067567407 container start 8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:36:35 compute-0 podman[109943]: 2025-11-25 09:36:35.045373418 +0000 UTC m=+0.068782545 container attach 8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:36:35 compute-0 trusting_rhodes[109956]: 167 167
Nov 25 09:36:35 compute-0 systemd[1]: libpod-8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f.scope: Deactivated successfully.
Nov 25 09:36:35 compute-0 conmon[109956]: conmon 8489d59d3e74b73a545c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f.scope/container/memory.events
Nov 25 09:36:35 compute-0 podman[109943]: 2025-11-25 09:36:35.0476429 +0000 UTC m=+0.071052005 container died 8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_rhodes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 09:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ec332e44e1094545c3eb9257c0b1395b73d1154d280ef5140386ec821cd9518-merged.mount: Deactivated successfully.
Nov 25 09:36:35 compute-0 podman[109943]: 2025-11-25 09:36:35.066977283 +0000 UTC m=+0.090386388 container remove 8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_rhodes, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:36:35 compute-0 podman[109943]: 2025-11-25 09:36:34.991720946 +0000 UTC m=+0.015130072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:36:35 compute-0 systemd[1]: libpod-conmon-8489d59d3e74b73a545cf2d8b7238212b6b3ee3ff702ce93920bd3ba5b31ae8f.scope: Deactivated successfully.
Nov 25 09:36:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v102: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Nov 25 09:36:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Nov 25 09:36:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 09:36:35 compute-0 podman[109978]: 2025-11-25 09:36:35.178961683 +0000 UTC m=+0.029502819 container create aede2db7449b2712d23dcad0801f79c688a308851f6e7b0e0d7d61e3d0f427c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_antonelli, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:36:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:35.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:35 compute-0 systemd[1]: Started libpod-conmon-aede2db7449b2712d23dcad0801f79c688a308851f6e7b0e0d7d61e3d0f427c9.scope.
Nov 25 09:36:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f050f4e9802b4faa0962b53f77dc332f2e4c7ede570fb717848429302c8b52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f050f4e9802b4faa0962b53f77dc332f2e4c7ede570fb717848429302c8b52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f050f4e9802b4faa0962b53f77dc332f2e4c7ede570fb717848429302c8b52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f050f4e9802b4faa0962b53f77dc332f2e4c7ede570fb717848429302c8b52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:35 compute-0 podman[109978]: 2025-11-25 09:36:35.233921816 +0000 UTC m=+0.084462952 container init aede2db7449b2712d23dcad0801f79c688a308851f6e7b0e0d7d61e3d0f427c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:36:35 compute-0 podman[109978]: 2025-11-25 09:36:35.239213193 +0000 UTC m=+0.089754329 container start aede2db7449b2712d23dcad0801f79c688a308851f6e7b0e0d7d61e3d0f427c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:36:35 compute-0 podman[109978]: 2025-11-25 09:36:35.240553275 +0000 UTC m=+0.091094411 container attach aede2db7449b2712d23dcad0801f79c688a308851f6e7b0e0d7d61e3d0f427c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:36:35 compute-0 podman[109978]: 2025-11-25 09:36:35.167254873 +0000 UTC m=+0.017796019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:36:35 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 25 09:36:35 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 25 09:36:35 compute-0 sad_antonelli[109991]: {
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:     "1": [
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:         {
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "devices": [
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "/dev/loop3"
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             ],
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "lv_name": "ceph_lv0",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "lv_size": "21470642176",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "name": "ceph_lv0",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "tags": {
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.cluster_name": "ceph",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.crush_device_class": "",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.encrypted": "0",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.osd_id": "1",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.type": "block",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.vdo": "0",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:                 "ceph.with_tpm": "0"
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             },
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "type": "block",
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:             "vg_name": "ceph_vg0"
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:         }
Nov 25 09:36:35 compute-0 sad_antonelli[109991]:     ]
Nov 25 09:36:35 compute-0 sad_antonelli[109991]: }
Nov 25 09:36:35 compute-0 systemd[1]: libpod-aede2db7449b2712d23dcad0801f79c688a308851f6e7b0e0d7d61e3d0f427c9.scope: Deactivated successfully.
Nov 25 09:36:35 compute-0 podman[109978]: 2025-11-25 09:36:35.476697818 +0000 UTC m=+0.327238954 container died aede2db7449b2712d23dcad0801f79c688a308851f6e7b0e0d7d61e3d0f427c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_antonelli, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9f050f4e9802b4faa0962b53f77dc332f2e4c7ede570fb717848429302c8b52-merged.mount: Deactivated successfully.
Nov 25 09:36:35 compute-0 podman[109978]: 2025-11-25 09:36:35.498165065 +0000 UTC m=+0.348706201 container remove aede2db7449b2712d23dcad0801f79c688a308851f6e7b0e0d7d61e3d0f427c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_antonelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 09:36:35 compute-0 systemd[1]: libpod-conmon-aede2db7449b2712d23dcad0801f79c688a308851f6e7b0e0d7d61e3d0f427c9.scope: Deactivated successfully.
Nov 25 09:36:35 compute-0 sudo[109886]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:35 compute-0 sudo[110010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:36:35 compute-0 sudo[110010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:35 compute-0 sudo[110010]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:35 compute-0 sudo[110035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:36:35 compute-0 sudo[110035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:35 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb00022b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 25 09:36:35 compute-0 ceph-mon[74207]: 10.1c scrub starts
Nov 25 09:36:35 compute-0 ceph-mon[74207]: 10.1c scrub ok
Nov 25 09:36:35 compute-0 ceph-mon[74207]: 10.13 deep-scrub starts
Nov 25 09:36:35 compute-0 ceph-mon[74207]: 10.13 deep-scrub ok
Nov 25 09:36:35 compute-0 ceph-mon[74207]: 8.3 scrub starts
Nov 25 09:36:35 compute-0 ceph-mon[74207]: 8.3 scrub ok
Nov 25 09:36:35 compute-0 ceph-mon[74207]: osdmap e85: 3 total, 3 up, 3 in
Nov 25 09:36:35 compute-0 ceph-mon[74207]: pgmap v102: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 09:36:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 09:36:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 09:36:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 25 09:36:35 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 25 09:36:35 compute-0 podman[110091]: 2025-11-25 09:36:35.898993146 +0000 UTC m=+0.027014587 container create 68b6de34eaf91deec5230cc6ae33591322890665b02feeecdaea3324158ae6f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hertz, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:36:35 compute-0 systemd[1]: Started libpod-conmon-68b6de34eaf91deec5230cc6ae33591322890665b02feeecdaea3324158ae6f5.scope.
Nov 25 09:36:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:35 compute-0 podman[110091]: 2025-11-25 09:36:35.95122781 +0000 UTC m=+0.079249261 container init 68b6de34eaf91deec5230cc6ae33591322890665b02feeecdaea3324158ae6f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 09:36:35 compute-0 podman[110091]: 2025-11-25 09:36:35.955429848 +0000 UTC m=+0.083451290 container start 68b6de34eaf91deec5230cc6ae33591322890665b02feeecdaea3324158ae6f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:36:35 compute-0 peaceful_hertz[110106]: 167 167
Nov 25 09:36:35 compute-0 systemd[1]: libpod-68b6de34eaf91deec5230cc6ae33591322890665b02feeecdaea3324158ae6f5.scope: Deactivated successfully.
Nov 25 09:36:35 compute-0 podman[110091]: 2025-11-25 09:36:35.958804068 +0000 UTC m=+0.086825538 container attach 68b6de34eaf91deec5230cc6ae33591322890665b02feeecdaea3324158ae6f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hertz, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:36:35 compute-0 podman[110091]: 2025-11-25 09:36:35.959174054 +0000 UTC m=+0.087195495 container died 68b6de34eaf91deec5230cc6ae33591322890665b02feeecdaea3324158ae6f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-162473f4950b222b3dc1dcbf22e1b7e4967f8b596d77c197028a2d3490bfa697-merged.mount: Deactivated successfully.
Nov 25 09:36:35 compute-0 podman[110091]: 2025-11-25 09:36:35.976274483 +0000 UTC m=+0.104295924 container remove 68b6de34eaf91deec5230cc6ae33591322890665b02feeecdaea3324158ae6f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:36:35 compute-0 podman[110091]: 2025-11-25 09:36:35.887210033 +0000 UTC m=+0.015231484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:36:35 compute-0 systemd[1]: libpod-conmon-68b6de34eaf91deec5230cc6ae33591322890665b02feeecdaea3324158ae6f5.scope: Deactivated successfully.
Nov 25 09:36:36 compute-0 podman[110128]: 2025-11-25 09:36:36.08430361 +0000 UTC m=+0.026144549 container create 5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:36:36 compute-0 systemd[1]: Started libpod-conmon-5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002.scope.
Nov 25 09:36:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b939ab05770bdff7bb3b473ad03d72aa65452d16c273cc83649203d4ef98fdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b939ab05770bdff7bb3b473ad03d72aa65452d16c273cc83649203d4ef98fdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b939ab05770bdff7bb3b473ad03d72aa65452d16c273cc83649203d4ef98fdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b939ab05770bdff7bb3b473ad03d72aa65452d16c273cc83649203d4ef98fdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:36:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:36 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc0002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:36 compute-0 podman[110128]: 2025-11-25 09:36:36.141357523 +0000 UTC m=+0.083198482 container init 5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:36:36 compute-0 podman[110128]: 2025-11-25 09:36:36.146225013 +0000 UTC m=+0.088065952 container start 5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:36:36 compute-0 podman[110128]: 2025-11-25 09:36:36.147355421 +0000 UTC m=+0.089196390 container attach 5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:36:36 compute-0 podman[110128]: 2025-11-25 09:36:36.074322508 +0000 UTC m=+0.016163467 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:36:36 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 09:36:36 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 09:36:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:36.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:36 compute-0 lvm[110217]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:36:36 compute-0 lvm[110217]: VG ceph_vg0 finished
Nov 25 09:36:36 compute-0 xenodochial_allen[110142]: {}
Nov 25 09:36:36 compute-0 systemd[1]: libpod-5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002.scope: Deactivated successfully.
Nov 25 09:36:36 compute-0 conmon[110142]: conmon 5b8a2970f92528c82ed0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002.scope/container/memory.events
Nov 25 09:36:36 compute-0 podman[110220]: 2025-11-25 09:36:36.659490047 +0000 UTC m=+0.017520500 container died 5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b939ab05770bdff7bb3b473ad03d72aa65452d16c273cc83649203d4ef98fdd-merged.mount: Deactivated successfully.
Nov 25 09:36:36 compute-0 podman[110220]: 2025-11-25 09:36:36.680316047 +0000 UTC m=+0.038346480 container remove 5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:36:36 compute-0 systemd[1]: libpod-conmon-5b8a2970f92528c82ed05b9133466328261c419a7197a2e59dcfa8e014c36002.scope: Deactivated successfully.
Nov 25 09:36:36 compute-0 sudo[110035]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:36:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:36:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:36 compute-0 sudo[110232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:36:36 compute-0 sudo[110232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:36 compute-0 sudo[110232]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:36 compute-0 ceph-mon[74207]: 10.1d scrub starts
Nov 25 09:36:36 compute-0 ceph-mon[74207]: 10.1d scrub ok
Nov 25 09:36:36 compute-0 ceph-mon[74207]: 7.1e scrub starts
Nov 25 09:36:36 compute-0 ceph-mon[74207]: 7.1e scrub ok
Nov 25 09:36:36 compute-0 ceph-mon[74207]: 2.1c deep-scrub starts
Nov 25 09:36:36 compute-0 ceph-mon[74207]: 2.1c deep-scrub ok
Nov 25 09:36:36 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 09:36:36 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 09:36:36 compute-0 ceph-mon[74207]: osdmap e86: 3 total, 3 up, 3 in
Nov 25 09:36:36 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:36 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:36:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:36 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v104: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 3 objects/s recovering
Nov 25 09:36:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 25 09:36:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 25 09:36:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 09:36:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:37.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:37 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 25 09:36:37 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 25 09:36:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:37 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc004230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 25 09:36:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 09:36:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 09:36:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 25 09:36:37 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 25 09:36:37 compute-0 ceph-mon[74207]: 12.1b scrub starts
Nov 25 09:36:37 compute-0 ceph-mon[74207]: 12.1b scrub ok
Nov 25 09:36:37 compute-0 ceph-mon[74207]: 5.1d scrub starts
Nov 25 09:36:37 compute-0 ceph-mon[74207]: 5.1d scrub ok
Nov 25 09:36:37 compute-0 ceph-mon[74207]: 10.1 scrub starts
Nov 25 09:36:37 compute-0 ceph-mon[74207]: 10.1 scrub ok
Nov 25 09:36:37 compute-0 ceph-mon[74207]: pgmap v104: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 3 objects/s recovering
Nov 25 09:36:37 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 09:36:37 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 09:36:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:38 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc004230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:38 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Nov 25 09:36:38 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Nov 25 09:36:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:38.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 25 09:36:38 compute-0 ceph-mon[74207]: 2.16 scrub starts
Nov 25 09:36:38 compute-0 ceph-mon[74207]: 2.16 scrub ok
Nov 25 09:36:38 compute-0 ceph-mon[74207]: 7.18 scrub starts
Nov 25 09:36:38 compute-0 ceph-mon[74207]: 7.18 scrub ok
Nov 25 09:36:38 compute-0 ceph-mon[74207]: 8.9 scrub starts
Nov 25 09:36:38 compute-0 ceph-mon[74207]: 8.9 scrub ok
Nov 25 09:36:38 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 09:36:38 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 09:36:38 compute-0 ceph-mon[74207]: osdmap e87: 3 total, 3 up, 3 in
Nov 25 09:36:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 25 09:36:38 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 25 09:36:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:38 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb0002dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:38 compute-0 sudo[110259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:38 compute-0 sudo[110259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:38 compute-0 sudo[110259]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v107: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 3 objects/s recovering
Nov 25 09:36:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Nov 25 09:36:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Nov 25 09:36:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 09:36:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:39.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:39 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 25 09:36:39 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 25 09:36:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:39 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb4004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 25 09:36:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 09:36:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 09:36:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 25 09:36:39 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 25 09:36:39 compute-0 ceph-mon[74207]: 7.12 scrub starts
Nov 25 09:36:39 compute-0 ceph-mon[74207]: 7.12 scrub ok
Nov 25 09:36:39 compute-0 ceph-mon[74207]: 12.12 scrub starts
Nov 25 09:36:39 compute-0 ceph-mon[74207]: 12.12 scrub ok
Nov 25 09:36:39 compute-0 ceph-mon[74207]: 11.a scrub starts
Nov 25 09:36:39 compute-0 ceph-mon[74207]: 11.a scrub ok
Nov 25 09:36:39 compute-0 ceph-mon[74207]: osdmap e88: 3 total, 3 up, 3 in
Nov 25 09:36:39 compute-0 ceph-mon[74207]: pgmap v107: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 3 objects/s recovering
Nov 25 09:36:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 09:36:39 compute-0 ceph-mon[74207]: 2.e scrub starts
Nov 25 09:36:39 compute-0 ceph-mon[74207]: 2.e scrub ok
Nov 25 09:36:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:40 : epoch 6925788c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:36:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:40 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc0009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:40] "GET /metrics HTTP/1.1" 200 48364 "" "Prometheus/2.51.0"
Nov 25 09:36:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:40] "GET /metrics HTTP/1.1" 200 48364 "" "Prometheus/2.51.0"
Nov 25 09:36:40 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Nov 25 09:36:40 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Nov 25 09:36:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:40.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 25 09:36:40 compute-0 ceph-mon[74207]: 2.17 deep-scrub starts
Nov 25 09:36:40 compute-0 ceph-mon[74207]: 2.17 deep-scrub ok
Nov 25 09:36:40 compute-0 ceph-mon[74207]: 4.2 scrub starts
Nov 25 09:36:40 compute-0 ceph-mon[74207]: 4.2 scrub ok
Nov 25 09:36:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 09:36:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 09:36:40 compute-0 ceph-mon[74207]: osdmap e89: 3 total, 3 up, 3 in
Nov 25 09:36:40 compute-0 ceph-mon[74207]: 7.b deep-scrub starts
Nov 25 09:36:40 compute-0 ceph-mon[74207]: 7.b deep-scrub ok
Nov 25 09:36:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 25 09:36:40 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 25 09:36:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:40 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc004230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v110: 337 pgs: 2 peering, 335 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s; 82 B/s, 3 objects/s recovering
Nov 25 09:36:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:41.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:41 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 25 09:36:41 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 25 09:36:41 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 89 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89 pruub=15.017866135s) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 42'42 active pruub 268.177459717s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:41 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 90 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89 pruub=15.017827034s) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 268.177459717s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:41 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb0002dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 25 09:36:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 25 09:36:41 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 25 09:36:41 compute-0 ceph-mon[74207]: 10.1f scrub starts
Nov 25 09:36:41 compute-0 ceph-mon[74207]: 10.1f scrub ok
Nov 25 09:36:41 compute-0 ceph-mon[74207]: 7.5 deep-scrub starts
Nov 25 09:36:41 compute-0 ceph-mon[74207]: 7.5 deep-scrub ok
Nov 25 09:36:41 compute-0 ceph-mon[74207]: osdmap e90: 3 total, 3 up, 3 in
Nov 25 09:36:41 compute-0 ceph-mon[74207]: pgmap v110: 337 pgs: 2 peering, 335 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s; 82 B/s, 3 objects/s recovering
Nov 25 09:36:41 compute-0 ceph-mon[74207]: 5.6 scrub starts
Nov 25 09:36:41 compute-0 ceph-mon[74207]: 5.6 scrub ok
Nov 25 09:36:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:42 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb0002dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:42 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 25 09:36:42 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 25 09:36:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:42 compute-0 ceph-mon[74207]: 12.16 scrub starts
Nov 25 09:36:42 compute-0 ceph-mon[74207]: 12.16 scrub ok
Nov 25 09:36:42 compute-0 ceph-mon[74207]: 8.a scrub starts
Nov 25 09:36:42 compute-0 ceph-mon[74207]: 8.a scrub ok
Nov 25 09:36:42 compute-0 ceph-mon[74207]: osdmap e91: 3 total, 3 up, 3 in
Nov 25 09:36:42 compute-0 ceph-mon[74207]: 7.4 scrub starts
Nov 25 09:36:42 compute-0 ceph-mon[74207]: 7.4 scrub ok
Nov 25 09:36:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:42 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc000a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:43 : epoch 6925788c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:36:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:43 : epoch 6925788c : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:36:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v112: 337 pgs: 2 peering, 335 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 962 B/s rd, 962 B/s wr, 1 op/s; 77 B/s, 3 objects/s recovering
Nov 25 09:36:43 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 25 09:36:43 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 25 09:36:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:43.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:43 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc000a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:43 compute-0 ceph-mon[74207]: 5.18 deep-scrub starts
Nov 25 09:36:43 compute-0 ceph-mon[74207]: 5.18 deep-scrub ok
Nov 25 09:36:43 compute-0 ceph-mon[74207]: 11.8 deep-scrub starts
Nov 25 09:36:43 compute-0 ceph-mon[74207]: 11.8 deep-scrub ok
Nov 25 09:36:43 compute-0 ceph-mon[74207]: pgmap v112: 337 pgs: 2 peering, 335 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 962 B/s rd, 962 B/s wr, 1 op/s; 77 B/s, 3 objects/s recovering
Nov 25 09:36:43 compute-0 ceph-mon[74207]: 3.1 scrub starts
Nov 25 09:36:43 compute-0 ceph-mon[74207]: 3.1 scrub ok
Nov 25 09:36:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:44 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb4004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:44 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.e deep-scrub starts
Nov 25 09:36:44 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.e deep-scrub ok
Nov 25 09:36:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:36:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:36:44
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Some PGs (0.005935) are inactive; try again later
Nov 25 09:36:44 compute-0 ceph-mon[74207]: 4.18 scrub starts
Nov 25 09:36:44 compute-0 ceph-mon[74207]: 4.18 scrub ok
Nov 25 09:36:44 compute-0 ceph-mon[74207]: 8.d scrub starts
Nov 25 09:36:44 compute-0 ceph-mon[74207]: 8.d scrub ok
Nov 25 09:36:44 compute-0 ceph-mon[74207]: 8.17 scrub starts
Nov 25 09:36:44 compute-0 ceph-mon[74207]: 8.17 scrub ok
Nov 25 09:36:44 compute-0 ceph-mon[74207]: 12.e deep-scrub starts
Nov 25 09:36:44 compute-0 ceph-mon[74207]: 12.e deep-scrub ok
Nov 25 09:36:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:36:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:44 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb0002dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:36:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:36:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:36:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:36:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:36:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:36:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:36:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v113: 337 pgs: 2 peering, 335 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s; 54 B/s, 2 objects/s recovering
Nov 25 09:36:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:36:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:45.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:36:45 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Nov 25 09:36:45 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Nov 25 09:36:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:45 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc000a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:45 compute-0 ceph-mon[74207]: 12.9 deep-scrub starts
Nov 25 09:36:45 compute-0 ceph-mon[74207]: 12.9 deep-scrub ok
Nov 25 09:36:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:36:45 compute-0 ceph-mon[74207]: pgmap v113: 337 pgs: 2 peering, 335 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s; 54 B/s, 2 objects/s recovering
Nov 25 09:36:45 compute-0 ceph-mon[74207]: 8.12 scrub starts
Nov 25 09:36:45 compute-0 ceph-mon[74207]: 8.12 scrub ok
Nov 25 09:36:45 compute-0 ceph-mon[74207]: 10.8 deep-scrub starts
Nov 25 09:36:45 compute-0 ceph-mon[74207]: 10.8 deep-scrub ok
Nov 25 09:36:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:46 : epoch 6925788c : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:36:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:46 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc000a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:46 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 25 09:36:46 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 25 09:36:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:46.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:46 compute-0 sudo[110417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svzrybnlpxhutewwhrupxmpbfdeflcgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063406.4491544-369-281213919807628/AnsiballZ_command.py'
Nov 25 09:36:46 compute-0 sudo[110417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:46 compute-0 python3.9[110419]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:36:46 compute-0 ceph-mon[74207]: 10.f scrub starts
Nov 25 09:36:46 compute-0 ceph-mon[74207]: 10.f scrub ok
Nov 25 09:36:46 compute-0 ceph-mon[74207]: 5.1b scrub starts
Nov 25 09:36:46 compute-0 ceph-mon[74207]: 5.1b scrub ok
Nov 25 09:36:46 compute-0 ceph-mon[74207]: 7.6 scrub starts
Nov 25 09:36:46 compute-0 ceph-mon[74207]: 7.6 scrub ok
Nov 25 09:36:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:46 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb4005130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v114: 337 pgs: 337 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 45 B/s, 1 objects/s recovering
Nov 25 09:36:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Nov 25 09:36:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 09:36:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Nov 25 09:36:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 09:36:47 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 25 09:36:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:36:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:47.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:36:47 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 25 09:36:47 compute-0 sudo[110417]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:47 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb0004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 25 09:36:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 09:36:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 09:36:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 25 09:36:47 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 25 09:36:47 compute-0 ceph-mon[74207]: 11.e scrub starts
Nov 25 09:36:47 compute-0 ceph-mon[74207]: 11.e scrub ok
Nov 25 09:36:47 compute-0 ceph-mon[74207]: pgmap v114: 337 pgs: 337 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 45 B/s, 1 objects/s recovering
Nov 25 09:36:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 09:36:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 09:36:47 compute-0 ceph-mon[74207]: 11.12 scrub starts
Nov 25 09:36:47 compute-0 ceph-mon[74207]: 11.12 scrub ok
Nov 25 09:36:47 compute-0 ceph-mon[74207]: 5.5 scrub starts
Nov 25 09:36:47 compute-0 ceph-mon[74207]: 5.5 scrub ok
Nov 25 09:36:47 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:48 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fc000a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:48 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.c deep-scrub starts
Nov 25 09:36:48 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.c deep-scrub ok
Nov 25 09:36:48 compute-0 sudo[110706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dreuimmbpquzzlwwzmirbumvxinxotia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063407.7388303-393-165667484613891/AnsiballZ_selinux.py'
Nov 25 09:36:48 compute-0 sudo[110706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:48 compute-0 python3.9[110708]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 09:36:48 compute-0 sudo[110706]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:36:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:36:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 25 09:36:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 25 09:36:48 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 25 09:36:48 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:48 compute-0 ceph-mon[74207]: 2.d scrub starts
Nov 25 09:36:48 compute-0 ceph-mon[74207]: 2.d scrub ok
Nov 25 09:36:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 09:36:48 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 09:36:48 compute-0 ceph-mon[74207]: osdmap e92: 3 total, 3 up, 3 in
Nov 25 09:36:48 compute-0 ceph-mon[74207]: 12.c deep-scrub starts
Nov 25 09:36:48 compute-0 ceph-mon[74207]: 5.1c deep-scrub starts
Nov 25 09:36:48 compute-0 ceph-mon[74207]: 12.c deep-scrub ok
Nov 25 09:36:48 compute-0 ceph-mon[74207]: 5.1c deep-scrub ok
Nov 25 09:36:48 compute-0 ceph-mon[74207]: osdmap e93: 3 total, 3 up, 3 in
Nov 25 09:36:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:48 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:49 compute-0 sudo[110858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfhsquvfzqqahjficunrlylveyqztnrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063408.8290586-426-238567880537342/AnsiballZ_command.py'
Nov 25 09:36:49 compute-0 sudo[110858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v117: 337 pgs: 337 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 09:36:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Nov 25 09:36:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 25 09:36:49 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 25 09:36:49 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 25 09:36:49 compute-0 python3.9[110860]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 09:36:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:49 compute-0 sudo[110858]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:49 compute-0 sudo[111010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqvlfjuvopblxazevjqfsrzsvhipibau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063409.3726857-450-259394592903119/AnsiballZ_file.py'
Nov 25 09:36:49 compute-0 sudo[111010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:49 compute-0 python3.9[111012]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:36:49 compute-0 sudo[111010]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:49 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 25 09:36:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 25 09:36:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 25 09:36:49 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 25 09:36:49 compute-0 ceph-mon[74207]: 12.3 scrub starts
Nov 25 09:36:49 compute-0 ceph-mon[74207]: 12.3 scrub ok
Nov 25 09:36:49 compute-0 ceph-mon[74207]: pgmap v117: 337 pgs: 337 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 09:36:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 25 09:36:49 compute-0 ceph-mon[74207]: 11.14 scrub starts
Nov 25 09:36:49 compute-0 ceph-mon[74207]: 11.14 scrub ok
Nov 25 09:36:49 compute-0 ceph-mon[74207]: 3.6 scrub starts
Nov 25 09:36:49 compute-0 ceph-mon[74207]: 3.6 scrub ok
Nov 25 09:36:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 25 09:36:49 compute-0 ceph-mon[74207]: osdmap e94: 3 total, 3 up, 3 in
Nov 25 09:36:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94 pruub=8.164560318s) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 270.002288818s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94 pruub=8.164503098s) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.002288818s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:50 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:50 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 09:36:50 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 09:36:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:50] "GET /metrics HTTP/1.1" 200 48364 "" "Prometheus/2.51.0"
Nov 25 09:36:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:36:50] "GET /metrics HTTP/1.1" 200 48364 "" "Prometheus/2.51.0"
Nov 25 09:36:50 compute-0 sudo[111164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgbadvamvmtfhuivpupmntlslzccevzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063409.9036465-474-130338693612517/AnsiballZ_mount.py'
Nov 25 09:36:50 compute-0 sudo[111164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:50 compute-0 python3.9[111166]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 09:36:50 compute-0 sudo[111164]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:36:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:50.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:36:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 25 09:36:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 25 09:36:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:50 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:50 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 25 09:36:50 compute-0 ceph-mon[74207]: 2.c scrub starts
Nov 25 09:36:50 compute-0 ceph-mon[74207]: 2.c scrub ok
Nov 25 09:36:50 compute-0 ceph-mon[74207]: 3.1c scrub starts
Nov 25 09:36:50 compute-0 ceph-mon[74207]: 3.1c scrub ok
Nov 25 09:36:50 compute-0 ceph-mon[74207]: 7.2 scrub starts
Nov 25 09:36:50 compute-0 ceph-mon[74207]: 7.2 scrub ok
Nov 25 09:36:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:50 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:36:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v120: 337 pgs: 1 unknown, 2 active+remapped, 334 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 3 objects/s recovering
Nov 25 09:36:51 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 09:36:51 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 09:36:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:51.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:51 compute-0 sudo[111316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmvzaxfequggfblujwubaylayokkcsqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063411.382471-558-140805923017881/AnsiballZ_file.py'
Nov 25 09:36:51 compute-0 sudo[111316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:51 compute-0 python3.9[111318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:36:51 compute-0 sudo[111316]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:51 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 25 09:36:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 25 09:36:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 25 09:36:51 compute-0 ceph-mon[74207]: 4.9 scrub starts
Nov 25 09:36:51 compute-0 ceph-mon[74207]: 4.9 scrub ok
Nov 25 09:36:51 compute-0 ceph-mon[74207]: osdmap e95: 3 total, 3 up, 3 in
Nov 25 09:36:51 compute-0 ceph-mon[74207]: pgmap v120: 337 pgs: 1 unknown, 2 active+remapped, 334 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 3 objects/s recovering
Nov 25 09:36:51 compute-0 ceph-mon[74207]: 8.14 scrub starts
Nov 25 09:36:51 compute-0 ceph-mon[74207]: 8.14 scrub ok
Nov 25 09:36:51 compute-0 ceph-mon[74207]: 7.e scrub starts
Nov 25 09:36:51 compute-0 ceph-mon[74207]: 7.e scrub ok
Nov 25 09:36:51 compute-0 ceph-mon[74207]: osdmap e96: 3 total, 3 up, 3 in
Nov 25 09:36:51 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:36:52 compute-0 sudo[111470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzjsqvzdebxnkrtmngmadbjyjorthlbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063411.9150894-582-120477105923770/AnsiballZ_stat.py'
Nov 25 09:36:52 compute-0 sudo[111470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:52 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:52 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 25 09:36:52 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 25 09:36:52 compute-0 python3.9[111472]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:36:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093652 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:36:52 compute-0 sudo[111470]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:52 compute-0 sudo[111548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxwzenwkngbhkzcuweglfgysxlhxklll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063411.9150894-582-120477105923770/AnsiballZ_file.py'
Nov 25 09:36:52 compute-0 sudo[111548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:52 compute-0 python3.9[111550]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:36:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 25 09:36:52 compute-0 sudo[111548]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 25 09:36:52 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97 pruub=15.264823914s) [0] async=[0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 279.703552246s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:52 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97 pruub=15.264730453s) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 279.703552246s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 25 09:36:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:52 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:52 compute-0 ceph-mon[74207]: 2.f scrub starts
Nov 25 09:36:52 compute-0 ceph-mon[74207]: 2.f scrub ok
Nov 25 09:36:52 compute-0 ceph-mon[74207]: 8.10 scrub starts
Nov 25 09:36:52 compute-0 ceph-mon[74207]: 8.10 scrub ok
Nov 25 09:36:52 compute-0 ceph-mon[74207]: 10.2 scrub starts
Nov 25 09:36:52 compute-0 ceph-mon[74207]: 10.2 scrub ok
Nov 25 09:36:52 compute-0 ceph-mon[74207]: osdmap e97: 3 total, 3 up, 3 in
Nov 25 09:36:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v123: 337 pgs: 1 unknown, 2 active+remapped, 334 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 3 objects/s recovering
Nov 25 09:36:53 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 25 09:36:53 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 25 09:36:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:36:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:53.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:36:53 compute-0 sudo[111700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlbxlwpyjmudivekqsdjgouiohmuvzbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063413.403643-645-96631351119875/AnsiballZ_stat.py'
Nov 25 09:36:53 compute-0 sudo[111700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 25 09:36:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 25 09:36:53 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 25 09:36:53 compute-0 python3.9[111702]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:36:53 compute-0 sudo[111700]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:53 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb4005130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:53 compute-0 ceph-mon[74207]: 7.a scrub starts
Nov 25 09:36:53 compute-0 ceph-mon[74207]: 7.a scrub ok
Nov 25 09:36:53 compute-0 ceph-mon[74207]: pgmap v123: 337 pgs: 1 unknown, 2 active+remapped, 334 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 3 objects/s recovering
Nov 25 09:36:53 compute-0 ceph-mon[74207]: 7.f scrub starts
Nov 25 09:36:53 compute-0 ceph-mon[74207]: 5.1 scrub starts
Nov 25 09:36:53 compute-0 ceph-mon[74207]: 7.f scrub ok
Nov 25 09:36:53 compute-0 ceph-mon[74207]: 5.1 scrub ok
Nov 25 09:36:53 compute-0 ceph-mon[74207]: osdmap e98: 3 total, 3 up, 3 in
Nov 25 09:36:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:54 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:54 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 25 09:36:54 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 25 09:36:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:54 compute-0 sudo[111856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbeamksaowgmntcyllpyzlnwfwotkoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063414.3081446-684-103456558139858/AnsiballZ_getent.py'
Nov 25 09:36:54 compute-0 sudo[111856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:54 compute-0 python3.9[111858]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 09:36:54 compute-0 sudo[111856]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:54 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:54 compute-0 ceph-mon[74207]: 8.5 scrub starts
Nov 25 09:36:54 compute-0 ceph-mon[74207]: 8.5 scrub ok
Nov 25 09:36:54 compute-0 ceph-mon[74207]: 11.f scrub starts
Nov 25 09:36:54 compute-0 ceph-mon[74207]: 11.f scrub ok
Nov 25 09:36:54 compute-0 ceph-mon[74207]: 5.3 scrub starts
Nov 25 09:36:54 compute-0 ceph-mon[74207]: 5.3 scrub ok
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v125: 337 pgs: 1 unknown, 2 active+remapped, 334 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:55 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Nov 25 09:36:55 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Nov 25 09:36:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:55.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.2892654815393633e-06 of space, bias 1.0, pg target 0.000686779644461809 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:36:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:36:55 compute-0 sudo[112009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efebdcaklbybahfhzrzgfwdzbvcxodyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063415.108838-714-21184724453736/AnsiballZ_getent.py'
Nov 25 09:36:55 compute-0 sudo[112009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:55 compute-0 python3.9[112011]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 09:36:55 compute-0 sudo[112009]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:55 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:55 compute-0 sudo[112164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnbowrysgjaujwhdkwnihfyafncyvujj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063415.610352-738-271706609939439/AnsiballZ_group.py'
Nov 25 09:36:55 compute-0 sudo[112164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:55 compute-0 ceph-mon[74207]: 4.8 scrub starts
Nov 25 09:36:55 compute-0 ceph-mon[74207]: 4.8 scrub ok
Nov 25 09:36:55 compute-0 ceph-mon[74207]: pgmap v125: 337 pgs: 1 unknown, 2 active+remapped, 334 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:36:55 compute-0 ceph-mon[74207]: 2.4 deep-scrub starts
Nov 25 09:36:55 compute-0 ceph-mon[74207]: 2.4 deep-scrub ok
Nov 25 09:36:55 compute-0 ceph-mon[74207]: 8.8 deep-scrub starts
Nov 25 09:36:55 compute-0 ceph-mon[74207]: 8.8 deep-scrub ok
Nov 25 09:36:56 compute-0 python3.9[112166]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:36:56 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Nov 25 09:36:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:56 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:56 compute-0 sudo[112164]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:56 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Nov 25 09:36:56 compute-0 sudo[112318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okmbulluvkdewowbllzndqipqcrnqzog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063416.3540308-765-207835173719630/AnsiballZ_file.py'
Nov 25 09:36:56 compute-0 sudo[112318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:56 compute-0 python3.9[112320]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 09:36:56 compute-0 sudo[112318]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:56 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:56 compute-0 ceph-mon[74207]: 11.19 scrub starts
Nov 25 09:36:56 compute-0 ceph-mon[74207]: 11.19 scrub ok
Nov 25 09:36:56 compute-0 ceph-mon[74207]: 3.f scrub starts
Nov 25 09:36:56 compute-0 ceph-mon[74207]: 3.f scrub ok
Nov 25 09:36:56 compute-0 ceph-mon[74207]: 12.8 scrub starts
Nov 25 09:36:56 compute-0 ceph-mon[74207]: 12.8 scrub ok
Nov 25 09:36:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 13 op/s; 0 B/s, 0 objects/s recovering
Nov 25 09:36:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Nov 25 09:36:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 25 09:36:57 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 25 09:36:57 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 25 09:36:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:57.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:57 compute-0 sudo[112470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyehmnsqltublpzfgpdpkjwsjjpbttbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063417.1174424-798-101261962221603/AnsiballZ_dnf.py'
Nov 25 09:36:57 compute-0 sudo[112470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:57 compute-0 python3.9[112472]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:36:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:36:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:57 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 25 09:36:58 compute-0 ceph-mon[74207]: 7.16 scrub starts
Nov 25 09:36:58 compute-0 ceph-mon[74207]: 7.16 scrub ok
Nov 25 09:36:58 compute-0 ceph-mon[74207]: 5.9 scrub starts
Nov 25 09:36:58 compute-0 ceph-mon[74207]: 5.9 scrub ok
Nov 25 09:36:58 compute-0 ceph-mon[74207]: pgmap v126: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 13 op/s; 0 B/s, 0 objects/s recovering
Nov 25 09:36:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 25 09:36:58 compute-0 ceph-mon[74207]: 2.6 scrub starts
Nov 25 09:36:58 compute-0 ceph-mon[74207]: 2.6 scrub ok
Nov 25 09:36:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 25 09:36:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 25 09:36:58 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
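
The handle_command → dispatch → finished → new-osdmap sequence above repeats through this whole window: mgr.compute-0.zcfgby is raising pgp_num_actual on default.rgw.log one step per pass (18 here, then 19 through 24 below), which is how the mgr gradually moves placement data after a pg_num increase. The same ramp could be driven by hand with the ceph CLI; a sketch, assuming an admin keyring on the node and leaving settling time between steps:

    import subprocess
    import time

    def ramp_pgp_num(pool: str, start: int, stop: int, pause: float = 2.0) -> None:
        """Raise pgp_num_actual one step at a time, as the mgr does in this log."""
        for val in range(start, stop + 1):
            # CLI form of the mon_command in the audit records above
            subprocess.run(
                ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", str(val)],
                check=True,
            )
            time.sleep(pause)  # let peering settle between steps

    if __name__ == "__main__":
        ramp_pgp_num("default.rgw.log", 18, 24)
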
Nov 25 09:36:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:58 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fd4003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:58 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Nov 25 09:36:58 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Nov 25 09:36:58 compute-0 sudo[112470]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:58.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:58 compute-0 sudo[112625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdudqofwkamsrwucvilloakevvlycexe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063418.6377947-822-253430438189597/AnsiballZ_file.py'
Nov 25 09:36:58 compute-0 sudo[112625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:58 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:58 compute-0 python3.9[112627]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:36:58 compute-0 sudo[112625]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:59 compute-0 ceph-mon[74207]: 12.1e scrub starts
Nov 25 09:36:59 compute-0 ceph-mon[74207]: 12.1e scrub ok
Nov 25 09:36:59 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 25 09:36:59 compute-0 ceph-mon[74207]: osdmap e99: 3 total, 3 up, 3 in
Nov 25 09:36:59 compute-0 ceph-mon[74207]: 8.4 deep-scrub starts
Nov 25 09:36:59 compute-0 ceph-mon[74207]: 8.4 deep-scrub ok
Nov 25 09:36:59 compute-0 ceph-mon[74207]: 7.3 deep-scrub starts
Nov 25 09:36:59 compute-0 ceph-mon[74207]: 7.3 deep-scrub ok
Nov 25 09:36:59 compute-0 sudo[112628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:36:59 compute-0 sudo[112628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:36:59 compute-0 sudo[112628]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v128: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 13 op/s; 0 B/s, 0 objects/s recovering
Nov 25 09:36:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Nov 25 09:36:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 25 09:36:59 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 09:36:59 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 09:36:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:36:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:36:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:59.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:36:59 compute-0 sudo[112802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyzazdahimnepvzhtlfumbirykxytsxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063419.1410456-846-182232752326171/AnsiballZ_stat.py'
Nov 25 09:36:59 compute-0 sudo[112802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:59 compute-0 python3.9[112804]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:36:59 compute-0 sudo[112802]: pam_unix(sudo:session): session closed for user root
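
The ansible.legacy.stat call above runs with get_checksum=True and checksum_algorithm=sha1 so the controller can compare /etc/modules-load.d/99-edpm.conf on disk against the rendered edpm-modprobe.conf.j2 template before deciding whether to copy. The checksum itself is plain SHA-1 over the file bytes:

    import hashlib

    def sha1_checksum(path: str, chunk_size: int = 65536) -> str:
        """Stream the file so it never has to fit in memory at once."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha1_checksum("/etc/modules-load.d/99-edpm.conf"))
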
Nov 25 09:36:59 compute-0 sudo[112881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmrawrwdieafyzglzcmyiluafkaoomma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063419.1410456-846-182232752326171/AnsiballZ_file.py'
Nov 25 09:36:59 compute-0 sudo[112881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:36:59 compute-0 python3.9[112883]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:36:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:36:59 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:36:59 compute-0 sudo[112881]: pam_unix(sudo:session): session closed for user root
Nov 25 09:36:59 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99 pruub=14.363901138s) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 286.002502441s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:36:59 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99 pruub=14.363644600s) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.002502441s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:36:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:36:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:00 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 25 09:37:00 compute-0 ceph-mon[74207]: 2.13 deep-scrub starts
Nov 25 09:37:00 compute-0 ceph-mon[74207]: 2.13 deep-scrub ok
Nov 25 09:37:00 compute-0 ceph-mon[74207]: 11.7 deep-scrub starts
Nov 25 09:37:00 compute-0 ceph-mon[74207]: 11.7 deep-scrub ok
Nov 25 09:37:00 compute-0 ceph-mon[74207]: pgmap v128: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 13 op/s; 0 B/s, 0 objects/s recovering
Nov 25 09:37:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 25 09:37:00 compute-0 ceph-mon[74207]: 5.c scrub starts
Nov 25 09:37:00 compute-0 ceph-mon[74207]: 5.c scrub ok
Nov 25 09:37:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:00 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 25 09:37:00 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 25 09:37:00 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 25 09:37:00 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100 pruub=14.240377426s) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 286.005401611s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:00 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100 pruub=14.240349770s) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.005401611s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:00 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:00 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
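
The osd.1 records above show PGs 9.11 and 9.12 re-peering as pgp_num_actual moves their mappings: start_peering_interval notes the up/acting set change, after which the PG lands in Stray (this OSD no longer holds a role) or Primary. A regex that extracts just the OSD, epoch, PG id, and resulting state from the "transitioning to" lines; the bracketed pg[...] dump carries many more fields that this deliberately ignores:

    import re

    TRANSITION_RE = re.compile(
        r"osd\.(?P<osd>\d+) pg_epoch: (?P<epoch>\d+) pg\[(?P<pgid>\d+\.[0-9a-f]+)\("
        r".*state<Start>: transitioning to (?P<state>\w+)"
    )

    def parse_transition(line: str):
        """Return the match fields, or None for non-transition peering lines."""
        m = TRANSITION_RE.search(line)
        return m.groupdict() if m else None

    sample = ("osd.1 pg_epoch: 100 pg[9.12( v 42'1151 ... mbc={}] "
              "state<Start>: transitioning to Stray")
    print(parse_transition(sample))
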
Nov 25 09:37:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:00 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:00 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 25 09:37:00 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 25 09:37:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:00] "GET /metrics HTTP/1.1" 200 48370 "" "Prometheus/2.51.0"
Nov 25 09:37:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:00] "GET /metrics HTTP/1.1" 200 48370 "" "Prometheus/2.51.0"
Nov 25 09:37:00 compute-0 sudo[113034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tieqveqqlmjyrgnsmtabgsdrjbyqrwpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063420.0656724-885-177639138323048/AnsiballZ_stat.py'
Nov 25 09:37:00 compute-0 sudo[113034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:00 compute-0 python3.9[113036]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:37:00 compute-0 sudo[113034]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:00 compute-0 sudo[113112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auecwoacqfflkcqumdeevirgqfujcejs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063420.0656724-885-177639138323048/AnsiballZ_file.py'
Nov 25 09:37:00 compute-0 sudo[113112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:00 compute-0 python3.9[113114]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:00 compute-0 sudo[113112]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:00 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fd4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 25 09:37:01 compute-0 ceph-mon[74207]: 4.14 scrub starts
Nov 25 09:37:01 compute-0 ceph-mon[74207]: 4.14 scrub ok
Nov 25 09:37:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 25 09:37:01 compute-0 ceph-mon[74207]: osdmap e100: 3 total, 3 up, 3 in
Nov 25 09:37:01 compute-0 ceph-mon[74207]: 5.f scrub starts
Nov 25 09:37:01 compute-0 ceph-mon[74207]: 5.f scrub ok
Nov 25 09:37:01 compute-0 ceph-mon[74207]: 7.9 scrub starts
Nov 25 09:37:01 compute-0 ceph-mon[74207]: 7.9 scrub ok
Nov 25 09:37:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 25 09:37:01 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 25 09:37:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:01 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v131: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 09:37:01 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 25 09:37:01 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 25 09:37:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:01.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:01 compute-0 sudo[113264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfmbstarwbxiahhlohpmldzjpxsxcymz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063421.1982925-930-266471522687936/AnsiballZ_dnf.py'
Nov 25 09:37:01 compute-0 sudo[113264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:01 compute-0 python3.9[113266]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
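
This dnf module run (state=present for tuned and tuned-profiles-cpu-partitioning) pairs with the state=absent run at 09:36:57 that removed dracut-config-generic. Functionally these reduce to dnf install/remove transactions; a sketch of the CLI equivalents, noting that the real module drives the dnf Python API and reports changed/ok rather than shelling out:

    import subprocess

    def dnf_present(packages: list[str]) -> None:
        # -y answers the transaction prompt, matching the module's non-interactive run
        subprocess.run(["dnf", "install", "-y", *packages], check=True)

    def dnf_absent(packages: list[str]) -> None:
        subprocess.run(["dnf", "remove", "-y", *packages], check=True)

    if __name__ == "__main__":
        dnf_absent(["dracut-config-generic"])
        dnf_present(["tuned", "tuned-profiles-cpu-partitioning"])
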
Nov 25 09:37:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:01 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 25 09:37:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 25 09:37:02 compute-0 ceph-mon[74207]: 7.14 scrub starts
Nov 25 09:37:02 compute-0 ceph-mon[74207]: 7.14 scrub ok
Nov 25 09:37:02 compute-0 ceph-mon[74207]: osdmap e101: 3 total, 3 up, 3 in
Nov 25 09:37:02 compute-0 ceph-mon[74207]: 11.1 scrub starts
Nov 25 09:37:02 compute-0 ceph-mon[74207]: 11.1 scrub ok
Nov 25 09:37:02 compute-0 ceph-mon[74207]: pgmap v131: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 09:37:02 compute-0 ceph-mon[74207]: 5.a scrub starts
Nov 25 09:37:02 compute-0 ceph-mon[74207]: 5.a scrub ok
Nov 25 09:37:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102 pruub=14.992504120s) [0] async=[0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 288.773162842s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102 pruub=14.992448807s) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 288.773162842s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 25 09:37:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:02 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:02 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 25 09:37:02 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 25 09:37:02 compute-0 sudo[113264]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:02.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 25 09:37:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 25 09:37:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103 pruub=15.344932556s) [0] async=[0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 289.785827637s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:02 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103 pruub=15.344882011s) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.785827637s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:02 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 25 09:37:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:02 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:03 compute-0 ceph-mon[74207]: 12.1d scrub starts
Nov 25 09:37:03 compute-0 ceph-mon[74207]: 12.1d scrub ok
Nov 25 09:37:03 compute-0 ceph-mon[74207]: osdmap e102: 3 total, 3 up, 3 in
Nov 25 09:37:03 compute-0 ceph-mon[74207]: 4.c scrub starts
Nov 25 09:37:03 compute-0 ceph-mon[74207]: 4.c scrub ok
Nov 25 09:37:03 compute-0 ceph-mon[74207]: 10.5 scrub starts
Nov 25 09:37:03 compute-0 ceph-mon[74207]: 10.5 scrub ok
Nov 25 09:37:03 compute-0 ceph-mon[74207]: osdmap e103: 3 total, 3 up, 3 in
Nov 25 09:37:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v134: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:03 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 25 09:37:03 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 25 09:37:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:03 compute-0 python3.9[113419]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:37:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 25 09:37:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 25 09:37:03 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 25 09:37:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:03 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fd4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:03 compute-0 python3.9[113572]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 09:37:04 compute-0 ceph-mon[74207]: 10.4 scrub starts
Nov 25 09:37:04 compute-0 ceph-mon[74207]: 10.4 scrub ok
Nov 25 09:37:04 compute-0 ceph-mon[74207]: 4.d scrub starts
Nov 25 09:37:04 compute-0 ceph-mon[74207]: 4.d scrub ok
Nov 25 09:37:04 compute-0 ceph-mon[74207]: pgmap v134: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:04 compute-0 ceph-mon[74207]: 7.8 scrub starts
Nov 25 09:37:04 compute-0 ceph-mon[74207]: 7.8 scrub ok
Nov 25 09:37:04 compute-0 ceph-mon[74207]: osdmap e104: 3 total, 3 up, 3 in
Nov 25 09:37:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:04 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 25 09:37:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 25 09:37:04 compute-0 python3.9[113723]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:37:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:04.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:04 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:05 compute-0 ceph-mon[74207]: 7.11 scrub starts
Nov 25 09:37:05 compute-0 ceph-mon[74207]: 7.11 scrub ok
Nov 25 09:37:05 compute-0 ceph-mon[74207]: 4.a scrub starts
Nov 25 09:37:05 compute-0 ceph-mon[74207]: 4.a scrub ok
Nov 25 09:37:05 compute-0 ceph-mon[74207]: 3.17 scrub starts
Nov 25 09:37:05 compute-0 ceph-mon[74207]: 3.17 scrub ok
Nov 25 09:37:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:05 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 25 09:37:05 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 25 09:37:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:05.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:05 compute-0 sudo[113873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tithkhxzvewfkzqeaiytnykivsijqdcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063424.9142-1053-210119833689582/AnsiballZ_systemd.py'
Nov 25 09:37:05 compute-0 sudo[113873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:05 compute-0 python3.9[113875]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:37:05 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 09:37:05 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 09:37:05 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 09:37:05 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 09:37:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:05 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:05 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 09:37:05 compute-0 sudo[113873]: pam_unix(sudo:session): session closed for user root
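
The unit messages above (Stopping → Deactivated → Starting → Started) are the systemd-side footprint of ansible.builtin.systemd with name=tuned, enabled=True, state=restarted. Roughly the same effect from Python via systemctl; the module itself adds daemon-reload handling and richer state checks:

    import subprocess

    def restart_and_enable(unit: str) -> None:
        # enabled=True -> `systemctl enable`; state=restarted -> `systemctl restart`
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)
        # verify the unit came back up before declaring success
        subprocess.run(["systemctl", "is-active", "--quiet", unit], check=True)

    if __name__ == "__main__":
        restart_and_enable("tuned.service")
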
Nov 25 09:37:06 compute-0 ceph-mon[74207]: 12.2 scrub starts
Nov 25 09:37:06 compute-0 ceph-mon[74207]: 12.2 scrub ok
Nov 25 09:37:06 compute-0 ceph-mon[74207]: 11.5 scrub starts
Nov 25 09:37:06 compute-0 ceph-mon[74207]: 11.5 scrub ok
Nov 25 09:37:06 compute-0 ceph-mon[74207]: pgmap v136: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:06 compute-0 ceph-mon[74207]: 7.13 scrub starts
Nov 25 09:37:06 compute-0 ceph-mon[74207]: 7.13 scrub ok
Nov 25 09:37:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:06 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fd4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:06 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Nov 25 09:37:06 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Nov 25 09:37:06 compute-0 python3.9[114038]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
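
ansible.builtin.slurp reads a remote file and returns it base64-encoded; slurping /proc/cmdline here lets a later task inspect the running kernel's boot arguments. A stand-in for the module's documented content/encoding/source return payload:

    import base64

    def slurp(path: str) -> dict:
        with open(path, "rb") as f:
            data = f.read()
        # base64 keeps arbitrary bytes safe across the JSON transport to the controller
        return {"content": base64.b64encode(data).decode("ascii"),
                "encoding": "base64",
                "source": path}

    result = slurp("/proc/cmdline")
    print(base64.b64decode(result["content"]).decode())
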
Nov 25 09:37:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:06.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:06 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:07 compute-0 ceph-mon[74207]: 2.10 scrub starts
Nov 25 09:37:07 compute-0 ceph-mon[74207]: 2.10 scrub ok
Nov 25 09:37:07 compute-0 ceph-mon[74207]: 8.1b scrub starts
Nov 25 09:37:07 compute-0 ceph-mon[74207]: 8.1b scrub ok
Nov 25 09:37:07 compute-0 ceph-mon[74207]: 12.19 scrub starts
Nov 25 09:37:07 compute-0 ceph-mon[74207]: 12.19 scrub ok
Nov 25 09:37:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v137: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 13 op/s; 36 B/s, 1 objects/s recovering
Nov 25 09:37:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Nov 25 09:37:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 25 09:37:07 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Nov 25 09:37:07 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Nov 25 09:37:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:07 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb0004f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 25 09:37:08 compute-0 ceph-mon[74207]: 4.15 scrub starts
Nov 25 09:37:08 compute-0 ceph-mon[74207]: 4.15 scrub ok
Nov 25 09:37:08 compute-0 ceph-mon[74207]: 5.16 scrub starts
Nov 25 09:37:08 compute-0 ceph-mon[74207]: 5.16 scrub ok
Nov 25 09:37:08 compute-0 ceph-mon[74207]: pgmap v137: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 13 op/s; 36 B/s, 1 objects/s recovering
Nov 25 09:37:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 25 09:37:08 compute-0 ceph-mon[74207]: 12.1c scrub starts
Nov 25 09:37:08 compute-0 ceph-mon[74207]: 12.1c scrub ok
Nov 25 09:37:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 25 09:37:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 25 09:37:08 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 25 09:37:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:08 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc0045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:08 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 25 09:37:08 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 25 09:37:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:08.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:08 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fd40057d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:09 compute-0 ceph-mon[74207]: 2.15 scrub starts
Nov 25 09:37:09 compute-0 ceph-mon[74207]: 2.15 scrub ok
Nov 25 09:37:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 25 09:37:09 compute-0 ceph-mon[74207]: osdmap e105: 3 total, 3 up, 3 in
Nov 25 09:37:09 compute-0 ceph-mon[74207]: 5.7 scrub starts
Nov 25 09:37:09 compute-0 ceph-mon[74207]: 5.7 scrub ok
Nov 25 09:37:09 compute-0 ceph-mon[74207]: 7.10 scrub starts
Nov 25 09:37:09 compute-0 ceph-mon[74207]: 7.10 scrub ok
Nov 25 09:37:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 12 op/s; 34 B/s, 1 objects/s recovering
Nov 25 09:37:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Nov 25 09:37:09 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 25 09:37:09 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 25 09:37:09 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 25 09:37:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:09.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:09 compute-0 sudo[114190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftadmhufcwkaplqvskduswxswfvosewj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063429.483251-1224-98589316466602/AnsiballZ_systemd.py'
Nov 25 09:37:09 compute-0 sudo[114190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:09 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc0043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:09 compute-0 python3.9[114193]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:37:09 compute-0 sudo[114190]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 25 09:37:10 compute-0 ceph-mon[74207]: 10.11 scrub starts
Nov 25 09:37:10 compute-0 ceph-mon[74207]: 10.11 scrub ok
Nov 25 09:37:10 compute-0 ceph-mon[74207]: pgmap v139: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 12 op/s; 34 B/s, 1 objects/s recovering
Nov 25 09:37:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 25 09:37:10 compute-0 ceph-mon[74207]: 5.14 scrub starts
Nov 25 09:37:10 compute-0 ceph-mon[74207]: 5.14 scrub ok
Nov 25 09:37:10 compute-0 ceph-mon[74207]: 11.4 scrub starts
Nov 25 09:37:10 compute-0 ceph-mon[74207]: 11.4 scrub ok
Nov 25 09:37:10 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 25 09:37:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 25 09:37:10 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 25 09:37:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:10 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb0004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:10 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 25 09:37:10 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 25 09:37:10 compute-0 sudo[114346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vstksiqhbmqnzffphfsipislghcauvps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063430.0517538-1224-88086358671557/AnsiballZ_systemd.py'
Nov 25 09:37:10 compute-0 sudo[114346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:10] "GET /metrics HTTP/1.1" 200 48370 "" "Prometheus/2.51.0"
Nov 25 09:37:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:10] "GET /metrics HTTP/1.1" 200 48370 "" "Prometheus/2.51.0"
Nov 25 09:37:10 compute-0 python3.9[114348]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:37:10 compute-0 sudo[114346]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:10.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:10 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc0045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:11 compute-0 sshd-session[106689]: Connection closed by 192.168.122.30 port 51578
Nov 25 09:37:11 compute-0 sshd-session[106686]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:37:11 compute-0 systemd-logind[744]: Session 39 logged out. Waiting for processes to exit.
Nov 25 09:37:11 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 25 09:37:11 compute-0 systemd[1]: session-39.scope: Consumed 46.791s CPU time.
Nov 25 09:37:11 compute-0 systemd-logind[744]: Removed session 39.
Nov 25 09:37:11 compute-0 ceph-mon[74207]: 2.12 scrub starts
Nov 25 09:37:11 compute-0 ceph-mon[74207]: 2.12 scrub ok
Nov 25 09:37:11 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 25 09:37:11 compute-0 ceph-mon[74207]: osdmap e106: 3 total, 3 up, 3 in
Nov 25 09:37:11 compute-0 ceph-mon[74207]: 3.12 scrub starts
Nov 25 09:37:11 compute-0 ceph-mon[74207]: 3.12 scrub ok
Nov 25 09:37:11 compute-0 ceph-mon[74207]: 5.2 scrub starts
Nov 25 09:37:11 compute-0 ceph-mon[74207]: 5.2 scrub ok
Nov 25 09:37:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s; 29 B/s, 1 objects/s recovering
Nov 25 09:37:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Nov 25 09:37:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 25 09:37:11 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 09:37:11 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 09:37:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:11.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:11 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fd40057d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 25 09:37:12 compute-0 ceph-mon[74207]: 8.1f scrub starts
Nov 25 09:37:12 compute-0 ceph-mon[74207]: 8.1f scrub ok
Nov 25 09:37:12 compute-0 ceph-mon[74207]: pgmap v141: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s; 29 B/s, 1 objects/s recovering
Nov 25 09:37:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 25 09:37:12 compute-0 ceph-mon[74207]: 11.1b scrub starts
Nov 25 09:37:12 compute-0 ceph-mon[74207]: 5.17 scrub starts
Nov 25 09:37:12 compute-0 ceph-mon[74207]: 11.1b scrub ok
Nov 25 09:37:12 compute-0 ceph-mon[74207]: 5.17 scrub ok
Nov 25 09:37:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 25 09:37:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 25 09:37:12 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 25 09:37:12 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 25 09:37:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:12 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 25 09:37:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:12.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 25 09:37:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 25 09:37:12 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 25 09:37:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:12 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb0004f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:13 compute-0 ceph-mon[74207]: 12.17 scrub starts
Nov 25 09:37:13 compute-0 ceph-mon[74207]: 12.17 scrub ok
Nov 25 09:37:13 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 25 09:37:13 compute-0 ceph-mon[74207]: osdmap e107: 3 total, 3 up, 3 in
Nov 25 09:37:13 compute-0 ceph-mon[74207]: 10.18 scrub starts
Nov 25 09:37:13 compute-0 ceph-mon[74207]: 10.18 scrub ok
Nov 25 09:37:13 compute-0 ceph-mon[74207]: 8.18 deep-scrub starts
Nov 25 09:37:13 compute-0 ceph-mon[74207]: 8.18 deep-scrub ok
Nov 25 09:37:13 compute-0 ceph-mon[74207]: osdmap e108: 3 total, 3 up, 3 in
Nov 25 09:37:13 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 25 09:37:13 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 25 09:37:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v144: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Nov 25 09:37:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 25 09:37:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 25 09:37:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 25 09:37:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 25 09:37:13 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 25 09:37:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:13 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc0045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:14 compute-0 ceph-mon[74207]: 7.1d scrub starts
Nov 25 09:37:14 compute-0 ceph-mon[74207]: 7.1d scrub ok
Nov 25 09:37:14 compute-0 ceph-mon[74207]: 10.19 scrub starts
Nov 25 09:37:14 compute-0 ceph-mon[74207]: 10.19 scrub ok
Nov 25 09:37:14 compute-0 ceph-mon[74207]: pgmap v144: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 25 09:37:14 compute-0 ceph-mon[74207]: 3.13 scrub starts
Nov 25 09:37:14 compute-0 ceph-mon[74207]: 3.13 scrub ok
Nov 25 09:37:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 25 09:37:14 compute-0 ceph-mon[74207]: osdmap e109: 3 total, 3 up, 3 in
Nov 25 09:37:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:14 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc0045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:14 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 25 09:37:14 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 25 09:37:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:14.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:37:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:14 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 25 09:37:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 25 09:37:15 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 25 09:37:15 compute-0 ceph-mon[74207]: 10.10 scrub starts
Nov 25 09:37:15 compute-0 ceph-mon[74207]: 10.10 scrub ok
Nov 25 09:37:15 compute-0 ceph-mon[74207]: 5.15 scrub starts
Nov 25 09:37:15 compute-0 ceph-mon[74207]: 10.1b scrub starts
Nov 25 09:37:15 compute-0 ceph-mon[74207]: 5.15 scrub ok
Nov 25 09:37:15 compute-0 ceph-mon[74207]: 10.1b scrub ok
Nov 25 09:37:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 25 09:37:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v147: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Nov 25 09:37:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 25 09:37:15 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 25 09:37:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:15 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 25 09:37:16 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 25 09:37:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 25 09:37:16 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 25 09:37:16 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 25 09:37:16 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 25 09:37:16 compute-0 ceph-mon[74207]: 2.18 scrub starts
Nov 25 09:37:16 compute-0 ceph-mon[74207]: 2.18 scrub ok
Nov 25 09:37:16 compute-0 ceph-mon[74207]: osdmap e110: 3 total, 3 up, 3 in
Nov 25 09:37:16 compute-0 ceph-mon[74207]: 11.1a scrub starts
Nov 25 09:37:16 compute-0 ceph-mon[74207]: 7.1b scrub starts
Nov 25 09:37:16 compute-0 ceph-mon[74207]: 11.1a scrub ok
Nov 25 09:37:16 compute-0 ceph-mon[74207]: pgmap v147: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:16 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 25 09:37:16 compute-0 ceph-mon[74207]: 7.1b scrub ok
Nov 25 09:37:16 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 25 09:37:16 compute-0 ceph-mon[74207]: osdmap e111: 3 total, 3 up, 3 in
Nov 25 09:37:16 compute-0 sshd-session[114381]: Accepted publickey for zuul from 192.168.122.30 port 54406 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:37:16 compute-0 systemd-logind[744]: New session 40 of user zuul.
Nov 25 09:37:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:16 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:16 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 25 09:37:16 compute-0 sshd-session[114381]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:37:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:16.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:16 compute-0 python3.9[114534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:16 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:17 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 25 09:37:17 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 25 09:37:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v149: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Nov 25 09:37:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 25 09:37:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 25 09:37:17 compute-0 ceph-mon[74207]: 4.1f scrub starts
Nov 25 09:37:17 compute-0 ceph-mon[74207]: 4.1f scrub ok
Nov 25 09:37:17 compute-0 ceph-mon[74207]: 2.1e scrub starts
Nov 25 09:37:17 compute-0 ceph-mon[74207]: 2.1e scrub ok
Nov 25 09:37:17 compute-0 ceph-mon[74207]: 3.10 scrub starts
Nov 25 09:37:17 compute-0 ceph-mon[74207]: 3.10 scrub ok
Nov 25 09:37:17 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 25 09:37:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:17.170Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:17.174Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:17.177Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:17 compute-0 sudo[114689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omroqaorfofgotppmxlooeewjyydtohd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063437.483056-68-47503680779352/AnsiballZ_getent.py'
Nov 25 09:37:17 compute-0 sudo[114689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:17 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc0045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:17 compute-0 python3.9[114691]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 09:37:17 compute-0 sudo[114689]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:18 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.b deep-scrub starts
Nov 25 09:37:18 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.b deep-scrub ok
Nov 25 09:37:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 25 09:37:18 compute-0 ceph-mon[74207]: 8.11 scrub starts
Nov 25 09:37:18 compute-0 ceph-mon[74207]: 8.11 scrub ok
Nov 25 09:37:18 compute-0 ceph-mon[74207]: 5.19 deep-scrub starts
Nov 25 09:37:18 compute-0 ceph-mon[74207]: 5.19 deep-scrub ok
Nov 25 09:37:18 compute-0 ceph-mon[74207]: pgmap v149: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Nov 25 09:37:18 compute-0 ceph-mon[74207]: 4.13 deep-scrub starts
Nov 25 09:37:18 compute-0 ceph-mon[74207]: osdmap e112: 3 total, 3 up, 3 in
Nov 25 09:37:18 compute-0 ceph-mon[74207]: 4.13 deep-scrub ok
Nov 25 09:37:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:18 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fd40064e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 25 09:37:18 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 25 09:37:18 compute-0 sudo[114843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyecucjotggwkncntxgxdvvvcugiebmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063438.3496704-104-123201910929182/AnsiballZ_setup.py'
Nov 25 09:37:18 compute-0 sudo[114843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:18.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:18 compute-0 python3.9[114845]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:37:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:18 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fbc004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:18 compute-0 sudo[114843]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:19 compute-0 sudo[114854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:19 compute-0 sudo[114854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:19 compute-0 sudo[114854]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:19 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 25 09:37:19 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 25 09:37:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v152: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 25 09:37:19 compute-0 ceph-mon[74207]: 7.1f scrub starts
Nov 25 09:37:19 compute-0 ceph-mon[74207]: 7.1f scrub ok
Nov 25 09:37:19 compute-0 ceph-mon[74207]: 12.b deep-scrub starts
Nov 25 09:37:19 compute-0 ceph-mon[74207]: 12.b deep-scrub ok
Nov 25 09:37:19 compute-0 ceph-mon[74207]: 11.1c deep-scrub starts
Nov 25 09:37:19 compute-0 ceph-mon[74207]: 11.1c deep-scrub ok
Nov 25 09:37:19 compute-0 ceph-mon[74207]: osdmap e113: 3 total, 3 up, 3 in
Nov 25 09:37:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:19 compute-0 sudo[114952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvkiylpzighyrjjqnuxbjozghwrfujhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063438.3496704-104-123201910929182/AnsiballZ_dnf.py'
Nov 25 09:37:19 compute-0 sudo[114952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:19 compute-0 python3.9[114954]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 09:37:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:19 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fcc0045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:20 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 25 09:37:20 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 25 09:37:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:20 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fd4006e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:20 compute-0 ceph-mon[74207]: 11.13 scrub starts
Nov 25 09:37:20 compute-0 ceph-mon[74207]: 11.13 scrub ok
Nov 25 09:37:20 compute-0 ceph-mon[74207]: 5.10 deep-scrub starts
Nov 25 09:37:20 compute-0 ceph-mon[74207]: 10.15 scrub starts
Nov 25 09:37:20 compute-0 ceph-mon[74207]: 5.10 deep-scrub ok
Nov 25 09:37:20 compute-0 ceph-mon[74207]: 10.15 scrub ok
Nov 25 09:37:20 compute-0 ceph-mon[74207]: pgmap v152: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 25 09:37:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:20] "GET /metrics HTTP/1.1" 200 48356 "" "Prometheus/2.51.0"
Nov 25 09:37:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:20] "GET /metrics HTTP/1.1" 200 48356 "" "Prometheus/2.51.0"
Nov 25 09:37:20 compute-0 sudo[114952]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:20.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:20 compute-0 sudo[115107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctorkmkuidrxunpwbxpugqrsysapvxgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063440.7312477-146-225298537110599/AnsiballZ_dnf.py'
Nov 25 09:37:20 compute-0 sudo[115107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[108869]: 25/11/2025 09:37:20 : epoch 6925788c : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6fb0004f90 fd 48 proxy ignored for local
Nov 25 09:37:20 compute-0 kernel: ganesha.nfsd[108981]: segfault at 50 ip 00007f706c90632e sp 00007f70327fb210 error 4 in libntirpc.so.5.8[7f706c8eb000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 25 09:37:20 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:37:20 compute-0 systemd[1]: Started Process Core Dump (PID 115110/UID 0).
Nov 25 09:37:21 compute-0 python3.9[115109]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:21 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.a scrub starts
Nov 25 09:37:21 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.a scrub ok
Nov 25 09:37:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v153: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 0 objects/s recovering
Nov 25 09:37:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Nov 25 09:37:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 25 09:37:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 25 09:37:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 25 09:37:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 25 09:37:21 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 25 09:37:21 compute-0 ceph-mon[74207]: 10.12 scrub starts
Nov 25 09:37:21 compute-0 ceph-mon[74207]: 10.12 scrub ok
Nov 25 09:37:21 compute-0 ceph-mon[74207]: 10.14 scrub starts
Nov 25 09:37:21 compute-0 ceph-mon[74207]: 10.14 scrub ok
Nov 25 09:37:21 compute-0 ceph-mon[74207]: 11.1e scrub starts
Nov 25 09:37:21 compute-0 ceph-mon[74207]: 11.1e scrub ok
Nov 25 09:37:21 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 25 09:37:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:21 compute-0 systemd-coredump[115111]: Process 108873 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 44:
                                                    #0  0x00007f706c90632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:37:21 compute-0 systemd[1]: systemd-coredump@1-115110-0.service: Deactivated successfully.
Nov 25 09:37:21 compute-0 podman[115119]: 2025-11-25 09:37:21.955282413 +0000 UTC m=+0.018283226 container died 2fef05441996c67a6cc0b95ce6ad83d10e00a4df6c227d7d108d23ac7279dd8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d097a945a5381c4285fe6ab24fca8dfb1b2fd891fd97c5ceca70b36e24d9ed8b-merged.mount: Deactivated successfully.
Nov 25 09:37:21 compute-0 podman[115119]: 2025-11-25 09:37:21.973165034 +0000 UTC m=+0.036165836 container remove 2fef05441996c67a6cc0b95ce6ad83d10e00a4df6c227d7d108d23ac7279dd8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:37:21 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:37:22 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:37:22 compute-0 sudo[115107]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:22 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Nov 25 09:37:22 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Nov 25 09:37:22 compute-0 ceph-mon[74207]: 10.1e scrub starts
Nov 25 09:37:22 compute-0 ceph-mon[74207]: 10.1e scrub ok
Nov 25 09:37:22 compute-0 ceph-mon[74207]: 3.14 scrub starts
Nov 25 09:37:22 compute-0 ceph-mon[74207]: 12.a scrub starts
Nov 25 09:37:22 compute-0 ceph-mon[74207]: 3.14 scrub ok
Nov 25 09:37:22 compute-0 ceph-mon[74207]: 12.a scrub ok
Nov 25 09:37:22 compute-0 ceph-mon[74207]: pgmap v153: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 0 objects/s recovering
Nov 25 09:37:22 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 25 09:37:22 compute-0 ceph-mon[74207]: osdmap e114: 3 total, 3 up, 3 in
Nov 25 09:37:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:22.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:22 compute-0 sudo[115301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-celohdvmxzcddblmmqyvjzfcrwlljalb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063442.2375665-170-178541830164445/AnsiballZ_systemd.py'
Nov 25 09:37:22 compute-0 sudo[115301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 09:37:22 compute-0 python3.9[115303]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:37:22 compute-0 sudo[115301]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:23 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Nov 25 09:37:23 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Nov 25 09:37:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v155: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 09:37:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Nov 25 09:37:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 25 09:37:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 25 09:37:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 25 09:37:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 25 09:37:23 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 25 09:37:23 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:23 compute-0 ceph-mon[74207]: 8.1c scrub starts
Nov 25 09:37:23 compute-0 ceph-mon[74207]: 8.1c scrub ok
Nov 25 09:37:23 compute-0 ceph-mon[74207]: 12.6 scrub starts
Nov 25 09:37:23 compute-0 ceph-mon[74207]: 12.6 scrub ok
Nov 25 09:37:23 compute-0 ceph-mon[74207]: 3.16 scrub starts
Nov 25 09:37:23 compute-0 ceph-mon[74207]: 3.16 scrub ok
Nov 25 09:37:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 25 09:37:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:23.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:23 compute-0 python3.9[115456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:24 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 25 09:37:24 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 25 09:37:24 compute-0 sudo[115608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmbweothnqkiaveqlguyqsoonmkevlie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063443.8093448-224-270395335856607/AnsiballZ_sefcontext.py'
Nov 25 09:37:24 compute-0 sudo[115608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 25 09:37:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 25 09:37:24 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 25 09:37:24 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:24 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:24 compute-0 ceph-mon[74207]: 10.3 scrub starts
Nov 25 09:37:24 compute-0 ceph-mon[74207]: 10.3 scrub ok
Nov 25 09:37:24 compute-0 ceph-mon[74207]: 12.10 scrub starts
Nov 25 09:37:24 compute-0 ceph-mon[74207]: 4.e scrub starts
Nov 25 09:37:24 compute-0 ceph-mon[74207]: 4.e scrub ok
Nov 25 09:37:24 compute-0 ceph-mon[74207]: 12.10 scrub ok
Nov 25 09:37:24 compute-0 ceph-mon[74207]: pgmap v155: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 09:37:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 25 09:37:24 compute-0 ceph-mon[74207]: osdmap e115: 3 total, 3 up, 3 in
Nov 25 09:37:24 compute-0 ceph-mon[74207]: osdmap e116: 3 total, 3 up, 3 in
Nov 25 09:37:24 compute-0 python3.9[115610]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 09:37:24 compute-0 sudo[115608]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:24.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:25 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 09:37:25 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 09:37:25 compute-0 python3.9[115760]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v158: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 09:37:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Nov 25 09:37:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 25 09:37:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 25 09:37:25 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 25 09:37:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 25 09:37:25 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 25 09:37:25 compute-0 ceph-mon[74207]: 12.7 scrub starts
Nov 25 09:37:25 compute-0 ceph-mon[74207]: 12.7 scrub ok
Nov 25 09:37:25 compute-0 ceph-mon[74207]: 6.4 scrub starts
Nov 25 09:37:25 compute-0 ceph-mon[74207]: 5.1f scrub starts
Nov 25 09:37:25 compute-0 ceph-mon[74207]: 5.1f scrub ok
Nov 25 09:37:25 compute-0 ceph-mon[74207]: 6.4 scrub ok
Nov 25 09:37:25 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 25 09:37:25 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:25.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:25 compute-0 sudo[115917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drjoywdcivdngoaplezbeuvswibwoaog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063445.4990084-278-23410977089590/AnsiballZ_dnf.py'
Nov 25 09:37:25 compute-0 sudo[115917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:25 compute-0 python3.9[115919]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:26 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Nov 25 09:37:26 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Nov 25 09:37:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 25 09:37:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 25 09:37:26 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 25 09:37:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:26 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:26 compute-0 ceph-mon[74207]: 8.c scrub starts
Nov 25 09:37:26 compute-0 ceph-mon[74207]: 5.11 scrub starts
Nov 25 09:37:26 compute-0 ceph-mon[74207]: 5.11 scrub ok
Nov 25 09:37:26 compute-0 ceph-mon[74207]: 8.c scrub ok
Nov 25 09:37:26 compute-0 ceph-mon[74207]: 6.6 scrub starts
Nov 25 09:37:26 compute-0 ceph-mon[74207]: 6.6 scrub ok
Nov 25 09:37:26 compute-0 ceph-mon[74207]: pgmap v158: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 09:37:26 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 25 09:37:26 compute-0 ceph-mon[74207]: osdmap e117: 3 total, 3 up, 3 in
Nov 25 09:37:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:26.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:26 compute-0 sudo[115917]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:26.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:26.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:26.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:26.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093726 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:37:27 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 25 09:37:27 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 25 09:37:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v161: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 459 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 25 09:37:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-0 ceph-mon[74207]: 8.b scrub starts
Nov 25 09:37:27 compute-0 ceph-mon[74207]: 8.b scrub ok
Nov 25 09:37:27 compute-0 ceph-mon[74207]: 11.1d scrub starts
Nov 25 09:37:27 compute-0 ceph-mon[74207]: 11.1d scrub ok
Nov 25 09:37:27 compute-0 ceph-mon[74207]: 6.0 scrub starts
Nov 25 09:37:27 compute-0 ceph-mon[74207]: 6.0 scrub ok
Nov 25 09:37:27 compute-0 ceph-mon[74207]: osdmap e118: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-0 ceph-mon[74207]: osdmap e119: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.231187) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447231210, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3178, "num_deletes": 251, "total_data_size": 5291878, "memory_usage": 5382216, "flush_reason": "Manual Compaction"}
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447240339, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 5110746, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7442, "largest_seqno": 10619, "table_properties": {"data_size": 5094817, "index_size": 10247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4613, "raw_key_size": 42628, "raw_average_key_size": 23, "raw_value_size": 5059393, "raw_average_value_size": 2782, "num_data_blocks": 444, "num_entries": 1818, "num_filter_entries": 1818, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063323, "oldest_key_time": 1764063323, "file_creation_time": 1764063447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9178 microseconds, and 7010 cpu microseconds.
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.240367) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 5110746 bytes OK
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.240379) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.242306) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.242317) EVENT_LOG_v1 {"time_micros": 1764063447242314, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.242328) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 5276629, prev total WAL file size 5276629, number of live WAL files 2.
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.243087) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(4990KB)], [23(12MB)]
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447243108, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 17798634, "oldest_snapshot_seqno": -1}
Nov 25 09:37:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3919 keys, 13861222 bytes, temperature: kUnknown
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447272950, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 13861222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13829355, "index_size": 20941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9861, "raw_key_size": 99583, "raw_average_key_size": 25, "raw_value_size": 13751967, "raw_average_value_size": 3509, "num_data_blocks": 906, "num_entries": 3919, "num_filter_entries": 3919, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764063447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.273225) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13861222 bytes
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.275758) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 593.3 rd, 462.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.9, 12.1 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 4449, records dropped: 530 output_compression: NoCompression
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.275774) EVENT_LOG_v1 {"time_micros": 1764063447275766, "job": 8, "event": "compaction_finished", "compaction_time_micros": 29997, "compaction_time_cpu_micros": 19996, "output_level": 6, "num_output_files": 1, "total_output_size": 13861222, "num_input_records": 4449, "num_output_records": 3919, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447276696, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447278401, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.243059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.278507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.278511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.278513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.278514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:37:27.278515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:37:27 compute-0 sudo[116071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpiajnmctikfdtqxcmsaovoygxyatzbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063447.0177217-302-45516973114856/AnsiballZ_command.py'
Nov 25 09:37:27 compute-0 sudo[116071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:27 compute-0 python3.9[116073]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
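
The task above runs rpm -V across the host's baseline packages; rpm -V prints nothing for packages that verify clean and one flagged line per discrepancy. A rough sketch of decoding that output, assuming the standard nine-character flag field documented in rpm(8) (the two package names below are just a subset of the list in the task):

    import subprocess

    # Flag positions per rpm(8): S size, M mode, 5 digest, D device,
    # L link, U user, G group, T mtime, P capabilities.
    FLAGS = {"S": "size", "M": "mode", "5": "digest", "D": "device",
             "L": "link", "U": "user", "G": "group", "T": "mtime",
             "P": "caps"}

    def verify(packages):
        proc = subprocess.run(["rpm", "-V", *packages],
                              capture_output=True, text=True)
        for line in proc.stdout.splitlines():
            flags, path = line[:9], line.split()[-1]
            changed = [name for ch, name in FLAGS.items() if ch in flags]
            # Lines like "missing   /path" carry no flag field at all.
            print(path, "->", ", ".join(changed) or line.split()[0])

    verify(["nftables", "NetworkManager"])
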
Nov 25 09:37:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 25 09:37:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 25 09:37:27 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:27 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:28 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 25 09:37:28 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 25 09:37:28 compute-0 sudo[116071]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:28 compute-0 ceph-mon[74207]: 6.b scrub starts
Nov 25 09:37:28 compute-0 ceph-mon[74207]: 6.b scrub ok
Nov 25 09:37:28 compute-0 ceph-mon[74207]: 6.1 scrub starts
Nov 25 09:37:28 compute-0 ceph-mon[74207]: 8.19 scrub starts
Nov 25 09:37:28 compute-0 ceph-mon[74207]: 6.1 scrub ok
Nov 25 09:37:28 compute-0 ceph-mon[74207]: 8.19 scrub ok
Nov 25 09:37:28 compute-0 ceph-mon[74207]: pgmap v161: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 459 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
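
The pgmap summaries repeated through this log have a fixed shape, so the per-state counts can be checked against the total. A sketch with the format assumed from the entry above:

    import re

    line = ("pgmap v161: 337 pgs: 1 unknown, 1 active+remapped, "
            "335 active+clean; 459 KiB data, 148 MiB used, "
            "60 GiB / 60 GiB avail")
    total = int(re.search(r"(\d+) pgs:", line).group(1))
    states = {s: int(n) for n, s in re.findall(r"(\d+) ([a-z+]+)[,;]", line)}
    assert sum(states.values()) == total   # 1 + 1 + 335 == 337
    print(states)
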
Nov 25 09:37:28 compute-0 ceph-mon[74207]: osdmap e120: 3 total, 3 up, 3 in
Nov 25 09:37:28 compute-0 sudo[116360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmvweyypgigtpbfrjihulizcamdbnhao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063448.2249637-326-232476714948224/AnsiballZ_file.py'
Nov 25 09:37:28 compute-0 sudo[116360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:28.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
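
These anonymous "HEAD / HTTP/1.0" requests arriving every second or two from 192.168.122.100 and 192.168.122.102 look like load-balancer health probes (a haproxy-rgw and a keepalived-rgw container are deployed on this host, as the podman entries further down show). A sketch of reproducing one probe by hand; both "localhost" and the port are assumptions, since the RGW frontend address is never shown in these lines:

    import http.client

    # Placeholder target: adjust to the deployed RGW beast frontend port.
    conn = http.client.HTTPConnection("localhost", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)   # a healthy RGW answers 200 with no body
    conn.close()
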
Nov 25 09:37:28 compute-0 python3.9[116362]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 09:37:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 25 09:37:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 25 09:37:28 compute-0 sudo[116360]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:28 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 25 09:37:28 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=120/121 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:29 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 09:37:29 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 09:37:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v165: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 459 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:29 compute-0 ceph-mon[74207]: 6.9 scrub starts
Nov 25 09:37:29 compute-0 ceph-mon[74207]: 9.9 scrub starts
Nov 25 09:37:29 compute-0 ceph-mon[74207]: 6.3 scrub starts
Nov 25 09:37:29 compute-0 ceph-mon[74207]: 6.9 scrub ok
Nov 25 09:37:29 compute-0 ceph-mon[74207]: 6.3 scrub ok
Nov 25 09:37:29 compute-0 ceph-mon[74207]: 9.9 scrub ok
Nov 25 09:37:29 compute-0 ceph-mon[74207]: osdmap e121: 3 total, 3 up, 3 in
Nov 25 09:37:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:29 compute-0 python3.9[116512]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:37:29 compute-0 sudo[116665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sknulbkbocdzwcgtmbrmhsyuwjpdqszz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063449.5044587-374-39720680380933/AnsiballZ_dnf.py'
Nov 25 09:37:29 compute-0 sudo[116665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:29 compute-0 python3.9[116667]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:37:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:30 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 25 09:37:30 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 25 09:37:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:30] "GET /metrics HTTP/1.1" 200 48357 "" "Prometheus/2.51.0"
Nov 25 09:37:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:30] "GET /metrics HTTP/1.1" 200 48357 "" "Prometheus/2.51.0"
Nov 25 09:37:30 compute-0 ceph-mon[74207]: 9.17 scrub starts
Nov 25 09:37:30 compute-0 ceph-mon[74207]: 6.2 scrub starts
Nov 25 09:37:30 compute-0 ceph-mon[74207]: 9.17 scrub ok
Nov 25 09:37:30 compute-0 ceph-mon[74207]: 6.2 scrub ok
Nov 25 09:37:30 compute-0 ceph-mon[74207]: 6.c scrub starts
Nov 25 09:37:30 compute-0 ceph-mon[74207]: 6.c scrub ok
Nov 25 09:37:30 compute-0 ceph-mon[74207]: pgmap v165: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 459 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:30.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:30 compute-0 sudo[116665]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:31 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Nov 25 09:37:31 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Nov 25 09:37:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Nov 25 09:37:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 25 09:37:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 25 09:37:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 25 09:37:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
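
Here the mgr is stepping pgp_num_actual on the default.rgw.log pool one placement group at a time; the same command reappears below with val 29 and then 30, and each step cuts a new osdmap epoch (e122, e124, e126). A sketch for reading the ramp back, assuming the ceph CLI is available on the host and that its JSON output carries a "pgp_num" key:

    import json
    import subprocess

    def pgp_num(pool):
        out = subprocess.run(
            ["ceph", "osd", "pool", "get", pool, "pgp_num",
             "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)["pgp_num"]

    # Pool name copied from the mon audit entries above.
    print(pgp_num("default.rgw.log"))   # 28 at this point, 30 after the steps below
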
Nov 25 09:37:31 compute-0 ceph-mon[74207]: 6.5 scrub starts
Nov 25 09:37:31 compute-0 ceph-mon[74207]: 9.16 scrub starts
Nov 25 09:37:31 compute-0 ceph-mon[74207]: 6.5 scrub ok
Nov 25 09:37:31 compute-0 ceph-mon[74207]: 9.16 scrub ok
Nov 25 09:37:31 compute-0 ceph-mon[74207]: 6.f scrub starts
Nov 25 09:37:31 compute-0 ceph-mon[74207]: 6.f scrub ok
Nov 25 09:37:31 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 25 09:37:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:31.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:31 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 25 09:37:31 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:31 compute-0 sudo[116819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvhjloxjnamqryrrksegkadyihghnzkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063451.159612-401-254835885659416/AnsiballZ_dnf.py'
Nov 25 09:37:31 compute-0 sudo[116819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:31 compute-0 python3.9[116821]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:32 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 25 09:37:32 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 25 09:37:32 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 2.
Nov 25 09:37:32 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:37:32 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
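
The "restart counter is at 2" line comes from systemd's automatic Restart= handling for this unit; the live counter can be read back with systemctl. A sketch, with the unit name copied from the entry above:

    import subprocess

    UNIT = ("ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
            "@nfs.cephfs.2.0.compute-0.rychik.service")
    out = subprocess.run(["systemctl", "show", "-p", "NRestarts", UNIT],
                         capture_output=True, text=True).stdout
    print(out.strip())   # e.g. NRestarts=2
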
Nov 25 09:37:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 25 09:37:32 compute-0 ceph-mon[74207]: 9.3 scrub starts
Nov 25 09:37:32 compute-0 ceph-mon[74207]: 6.a scrub starts
Nov 25 09:37:32 compute-0 ceph-mon[74207]: 6.a scrub ok
Nov 25 09:37:32 compute-0 ceph-mon[74207]: 9.3 scrub ok
Nov 25 09:37:32 compute-0 ceph-mon[74207]: 9.14 deep-scrub starts
Nov 25 09:37:32 compute-0 ceph-mon[74207]: 9.14 deep-scrub ok
Nov 25 09:37:32 compute-0 ceph-mon[74207]: pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:32 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 25 09:37:32 compute-0 ceph-mon[74207]: osdmap e122: 3 total, 3 up, 3 in
Nov 25 09:37:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 25 09:37:32 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 25 09:37:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:32 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:32 compute-0 podman[116863]: 2025-11-25 09:37:32.332057506 +0000 UTC m=+0.028470308 container create 9671431d2cc7041cad268f763e1621afb7848bcaff0c37f8ccb22e9fe6f05c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67eda61695ba4816aca781e0108f8fc6e5c458892520d2a89c4ab81b3bc6ead/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67eda61695ba4816aca781e0108f8fc6e5c458892520d2a89c4ab81b3bc6ead/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67eda61695ba4816aca781e0108f8fc6e5c458892520d2a89c4ab81b3bc6ead/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67eda61695ba4816aca781e0108f8fc6e5c458892520d2a89c4ab81b3bc6ead/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
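
The kernel notices above fire because these overlay bind sources sit on xfs without the bigtime feature, so inode timestamps stay 32-bit signed. The quoted ceiling is just the classic time_t limit:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t value.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
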
Nov 25 09:37:32 compute-0 podman[116863]: 2025-11-25 09:37:32.37366838 +0000 UTC m=+0.070081192 container init 9671431d2cc7041cad268f763e1621afb7848bcaff0c37f8ccb22e9fe6f05c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:37:32 compute-0 podman[116863]: 2025-11-25 09:37:32.378711461 +0000 UTC m=+0.075124263 container start 9671431d2cc7041cad268f763e1621afb7848bcaff0c37f8ccb22e9fe6f05c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:37:32 compute-0 bash[116863]: 9671431d2cc7041cad268f763e1621afb7848bcaff0c37f8ccb22e9fe6f05c2c
Nov 25 09:37:32 compute-0 podman[116863]: 2025-11-25 09:37:32.320810416 +0000 UTC m=+0.017223229 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:37:32 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:37:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:32 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:37:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:32 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:37:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:32 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:37:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:32 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:37:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:32 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:37:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:32 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:37:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:32 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:37:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:32 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
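
Ganesha enters a 90-second grace window at startup; the nfs_try_lift_grace entry a few seconds further down (reclaim complete(0), clid count(0)) shows it checking for an early lift with no clients left to reclaim. A sketch that pairs the two timestamps, with the stamp format copied from these entries:

    import re
    from datetime import datetime

    STAMP = re.compile(r"\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}")

    def stamp(line):
        return datetime.strptime(STAMP.search(line).group(0),
                                 "%d/%m/%Y %H:%M:%S")

    start = stamp("25/11/2025 09:37:32 : ... NFS Server Now IN GRACE, duration 90")
    check = stamp("25/11/2025 09:37:38 : ... check grace:reclaim complete(0) clid count(0)")
    print((check - start).total_seconds())   # 6.0 -> well inside the 90 s window
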
Nov 25 09:37:32 compute-0 sudo[116819]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:32.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:33 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 25 09:37:33 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 25 09:37:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Nov 25 09:37:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 09:37:33 compute-0 sudo[117066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjegeguyptnoqjvpifnwjazyqslevxdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063452.9841466-437-86893586122656/AnsiballZ_stat.py'
Nov 25 09:37:33 compute-0 sudo[117066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:33.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 25 09:37:33 compute-0 ceph-mon[74207]: 9.8 scrub starts
Nov 25 09:37:33 compute-0 ceph-mon[74207]: 6.7 scrub starts
Nov 25 09:37:33 compute-0 ceph-mon[74207]: 6.7 scrub ok
Nov 25 09:37:33 compute-0 ceph-mon[74207]: 9.8 scrub ok
Nov 25 09:37:33 compute-0 ceph-mon[74207]: 9.c scrub starts
Nov 25 09:37:33 compute-0 ceph-mon[74207]: 9.c scrub ok
Nov 25 09:37:33 compute-0 ceph-mon[74207]: osdmap e123: 3 total, 3 up, 3 in
Nov 25 09:37:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 09:37:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 09:37:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 25 09:37:33 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 25 09:37:33 compute-0 python3.9[117068]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:37:33 compute-0 sudo[117066]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:33 compute-0 sudo[117221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzoxhcrxsultjwlnkuxdsdiymtmoroxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063453.4994524-461-49371821938866/AnsiballZ_slurp.py'
Nov 25 09:37:33 compute-0 sudo[117221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:33 compute-0 python3.9[117223]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 25 09:37:33 compute-0 sudo[117221]: pam_unix(sudo:session): session closed for user root
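
ansible.builtin.slurp, invoked just above on os-net-config.returncode, hands the controller the file content base64-encoded. A sketch of what the controller side does with it; the payload value ("0", i.e. a clean exit of the earlier os-net-config run) is an assumption for illustration, not something shown in the log:

    import base64

    # Shape of a slurp result, with an assumed payload of b"0\n".
    result = {"content": base64.b64encode(b"0\n").decode(),
              "encoding": "base64",
              "source": "/var/lib/edpm-config/os-net-config.returncode"}
    rc = int(base64.b64decode(result["content"]).strip())
    print(rc)   # 0 would mean os-net-config exited cleanly
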
Nov 25 09:37:34 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Nov 25 09:37:34 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Nov 25 09:37:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 25 09:37:34 compute-0 ceph-mon[74207]: 6.d scrub starts
Nov 25 09:37:34 compute-0 ceph-mon[74207]: 6.d scrub ok
Nov 25 09:37:34 compute-0 ceph-mon[74207]: 9.b scrub starts
Nov 25 09:37:34 compute-0 ceph-mon[74207]: 9.b scrub ok
Nov 25 09:37:34 compute-0 ceph-mon[74207]: 9.2 scrub starts
Nov 25 09:37:34 compute-0 ceph-mon[74207]: 9.2 scrub ok
Nov 25 09:37:34 compute-0 ceph-mon[74207]: pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:34 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 09:37:34 compute-0 ceph-mon[74207]: osdmap e124: 3 total, 3 up, 3 in
Nov 25 09:37:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 25 09:37:34 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 25 09:37:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:34 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:34.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:35 compute-0 sshd-session[114384]: Connection closed by 192.168.122.30 port 54406
Nov 25 09:37:35 compute-0 sshd-session[114381]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:37:35 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 25 09:37:35 compute-0 systemd[1]: session-40.scope: Consumed 13.061s CPU time.
Nov 25 09:37:35 compute-0 systemd-logind[744]: Session 40 logged out. Waiting for processes to exit.
Nov 25 09:37:35 compute-0 systemd-logind[744]: Removed session 40.
Nov 25 09:37:35 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 25 09:37:35 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 25 09:37:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Nov 25 09:37:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 25 09:37:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:35.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 25 09:37:35 compute-0 ceph-mon[74207]: 6.e scrub starts
Nov 25 09:37:35 compute-0 ceph-mon[74207]: 6.e scrub ok
Nov 25 09:37:35 compute-0 ceph-mon[74207]: 9.7 scrub starts
Nov 25 09:37:35 compute-0 ceph-mon[74207]: 9.7 scrub ok
Nov 25 09:37:35 compute-0 ceph-mon[74207]: 9.0 scrub starts
Nov 25 09:37:35 compute-0 ceph-mon[74207]: 9.0 scrub ok
Nov 25 09:37:35 compute-0 ceph-mon[74207]: osdmap e125: 3 total, 3 up, 3 in
Nov 25 09:37:35 compute-0 ceph-mon[74207]: 6.8 scrub starts
Nov 25 09:37:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 25 09:37:35 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 25 09:37:35 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 25 09:37:35 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 25 09:37:35 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=125/126 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:36 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 25 09:37:36 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 25 09:37:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 25 09:37:36 compute-0 ceph-mon[74207]: 6.8 scrub ok
Nov 25 09:37:36 compute-0 ceph-mon[74207]: 9.5 scrub starts
Nov 25 09:37:36 compute-0 ceph-mon[74207]: 9.5 scrub ok
Nov 25 09:37:36 compute-0 ceph-mon[74207]: 9.1 scrub starts
Nov 25 09:37:36 compute-0 ceph-mon[74207]: 9.1 scrub ok
Nov 25 09:37:36 compute-0 ceph-mon[74207]: pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:36 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 25 09:37:36 compute-0 ceph-mon[74207]: osdmap e126: 3 total, 3 up, 3 in
Nov 25 09:37:36 compute-0 ceph-mon[74207]: 9.10 scrub starts
Nov 25 09:37:36 compute-0 ceph-mon[74207]: 9.10 scrub ok
Nov 25 09:37:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 25 09:37:36 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 25 09:37:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:37:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:37:36 compute-0 sudo[117251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:36 compute-0 sudo[117251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:36 compute-0 sudo[117251]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:36.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:36.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:36.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:36.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
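
The alertmanager dispatcher above keeps failing for one reason only: the three ceph-dashboard webhook targets do not resolve via the 192.168.122.80 resolver. A sketch that reproduces the lookup failure directly; the hostnames are copied from the error messages and nothing else is assumed:

    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as exc:
            print(host, "->", exc)   # expect: name resolution failure
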
Nov 25 09:37:36 compute-0 sudo[117276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:37:36 compute-0 sudo[117276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:37 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 25 09:37:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v175: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:37 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 25 09:37:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:37.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 25 09:37:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 25 09:37:37 compute-0 ceph-mon[74207]: 9.18 scrub starts
Nov 25 09:37:37 compute-0 ceph-mon[74207]: 9.18 scrub ok
Nov 25 09:37:37 compute-0 ceph-mon[74207]: 9.4 scrub starts
Nov 25 09:37:37 compute-0 ceph-mon[74207]: 9.4 scrub ok
Nov 25 09:37:37 compute-0 ceph-mon[74207]: osdmap e127: 3 total, 3 up, 3 in
Nov 25 09:37:37 compute-0 ceph-mon[74207]: 9.11 scrub starts
Nov 25 09:37:37 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 25 09:37:37 compute-0 podman[117358]: 2025-11-25 09:37:37.371924877 +0000 UTC m=+0.044101748 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:37:37 compute-0 podman[117358]: 2025-11-25 09:37:37.449097948 +0000 UTC m=+0.121274809 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:37:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:37 compute-0 podman[117471]: 2025-11-25 09:37:37.79713168 +0000 UTC m=+0.034062533 container exec e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:37:37 compute-0 podman[117471]: 2025-11-25 09:37:37.802047752 +0000 UTC m=+0.038978586 container exec_died e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:37:37 compute-0 podman[117543]: 2025-11-25 09:37:37.992241692 +0000 UTC m=+0.034503655 container exec 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:37:38 compute-0 podman[117543]: 2025-11-25 09:37:38.011045668 +0000 UTC m=+0.053307622 container exec_died 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:37:38 compute-0 podman[117599]: 2025-11-25 09:37:38.157790258 +0000 UTC m=+0.037730333 container exec c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:37:38 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 25 09:37:38 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 25 09:37:38 compute-0 podman[117599]: 2025-11-25 09:37:38.276517607 +0000 UTC m=+0.156457682 container exec_died c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:37:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 25 09:37:38 compute-0 ceph-mon[74207]: 9.13 deep-scrub starts
Nov 25 09:37:38 compute-0 ceph-mon[74207]: 9.11 scrub ok
Nov 25 09:37:38 compute-0 ceph-mon[74207]: 9.13 deep-scrub ok
Nov 25 09:37:38 compute-0 ceph-mon[74207]: 9.1c scrub starts
Nov 25 09:37:38 compute-0 ceph-mon[74207]: pgmap v175: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:38 compute-0 ceph-mon[74207]: 9.1c scrub ok
Nov 25 09:37:38 compute-0 ceph-mon[74207]: osdmap e128: 3 total, 3 up, 3 in
Nov 25 09:37:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 25 09:37:38 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 25 09:37:38 compute-0 podman[117657]: 2025-11-25 09:37:38.42003613 +0000 UTC m=+0.035310014 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:37:38 compute-0 podman[117657]: 2025-11-25 09:37:38.427109076 +0000 UTC m=+0.042382959 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:37:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:38 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:37:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:38 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:37:38 compute-0 podman[117709]: 2025-11-25 09:37:38.570991795 +0000 UTC m=+0.034190895 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64)
Nov 25 09:37:38 compute-0 podman[117709]: 2025-11-25 09:37:38.579053203 +0000 UTC m=+0.042252283 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph)
Nov 25 09:37:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:38.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:38 compute-0 podman[117760]: 2025-11-25 09:37:38.719622951 +0000 UTC m=+0.033306339 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:37:38 compute-0 podman[117760]: 2025-11-25 09:37:38.7450954 +0000 UTC m=+0.058778789 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:37:38 compute-0 podman[117807]: 2025-11-25 09:37:38.855885625 +0000 UTC m=+0.033779560 container exec 9671431d2cc7041cad268f763e1621afb7848bcaff0c37f8ccb22e9fe6f05c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:37:38 compute-0 podman[117807]: 2025-11-25 09:37:38.868083565 +0000 UTC m=+0.045977491 container exec_died 9671431d2cc7041cad268f763e1621afb7848bcaff0c37f8ccb22e9fe6f05c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:37:39 compute-0 sudo[117276]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:39 compute-0 sudo[117863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:39 compute-0 sudo[117863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:39 compute-0 sudo[117863]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:39 compute-0 sudo[117888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:39 compute-0 sudo[117888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:39 compute-0 sudo[117888]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:39 compute-0 sudo[117891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:37:39 compute-0 sudo[117891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v178: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:39 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Nov 25 09:37:39 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Nov 25 09:37:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:39.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 25 09:37:39 compute-0 ceph-mon[74207]: 9.1a scrub starts
Nov 25 09:37:39 compute-0 ceph-mon[74207]: 9.1a scrub ok
Nov 25 09:37:39 compute-0 ceph-mon[74207]: osdmap e129: 3 total, 3 up, 3 in
Nov 25 09:37:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 25 09:37:39 compute-0 sudo[117891]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:37:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:37:39 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:39 compute-0 sudo[117967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:39 compute-0 sudo[117967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:39 compute-0 sudo[117967]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:39 compute-0 sudo[117992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:37:39 compute-0 sudo[117992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
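Here cephadm (the flat binary staged under /var/lib/ceph/<fsid>/) wraps ceph-volume in a one-shot container to batch-prepare the listed LV. The sketch below is a reconstruction of exactly the sudo COMMAND logged above, expressed from Python for readability; the keyring/conf payload that cephadm streams on stdin (the '--config-json -' argument) is elided here just as it is in the log:

    import subprocess

    # Values copied verbatim from the sudo record above.
    FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def lvm_batch(config_json: str) -> subprocess.CompletedProcess:
        """Re-issue the logged 'ceph-volume lvm batch' call; config_json is
        the conf/keyring payload cephadm feeds on stdin ('--config-json -')."""
        cmd = [
            "sudo", "/bin/python3", CEPHADM,
            "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
            "--image", IMAGE, "--timeout", "895",
            "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
            "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
            "--yes", "--no-systemd",
        ]
        return subprocess.run(cmd, input=config_json, text=True,
                              capture_output=True)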
Nov 25 09:37:39 compute-0 podman[118049]: 2025-11-25 09:37:39.894953462 +0000 UTC m=+0.030398951 container create 89c95ca948d35feb17ebaf8a054d98e895c7246829f8ab3529dbd03a21627f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kowalevski, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:37:39 compute-0 systemd[1]: Started libpod-conmon-89c95ca948d35feb17ebaf8a054d98e895c7246829f8ab3529dbd03a21627f30.scope.
Nov 25 09:37:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:39 compute-0 podman[118049]: 2025-11-25 09:37:39.945367049 +0000 UTC m=+0.080812548 container init 89c95ca948d35feb17ebaf8a054d98e895c7246829f8ab3529dbd03a21627f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kowalevski, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:37:39 compute-0 podman[118049]: 2025-11-25 09:37:39.950420569 +0000 UTC m=+0.085866059 container start 89c95ca948d35feb17ebaf8a054d98e895c7246829f8ab3529dbd03a21627f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:37:39 compute-0 podman[118049]: 2025-11-25 09:37:39.952341649 +0000 UTC m=+0.087787138 container attach 89c95ca948d35feb17ebaf8a054d98e895c7246829f8ab3529dbd03a21627f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kowalevski, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:37:39 compute-0 sweet_kowalevski[118063]: 167 167
Nov 25 09:37:39 compute-0 systemd[1]: libpod-89c95ca948d35feb17ebaf8a054d98e895c7246829f8ab3529dbd03a21627f30.scope: Deactivated successfully.
Nov 25 09:37:39 compute-0 podman[118049]: 2025-11-25 09:37:39.954523009 +0000 UTC m=+0.089968498 container died 89c95ca948d35feb17ebaf8a054d98e895c7246829f8ab3529dbd03a21627f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kowalevski, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-877cbec947d08fea185bba82e940799341635a585f8ca86d4e135d6f40ec290c-merged.mount: Deactivated successfully.
Nov 25 09:37:39 compute-0 podman[118049]: 2025-11-25 09:37:39.974811732 +0000 UTC m=+0.110257221 container remove 89c95ca948d35feb17ebaf8a054d98e895c7246829f8ab3529dbd03a21627f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kowalevski, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:37:39 compute-0 podman[118049]: 2025-11-25 09:37:39.882347503 +0000 UTC m=+0.017793012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:37:39 compute-0 systemd[1]: libpod-conmon-89c95ca948d35feb17ebaf8a054d98e895c7246829f8ab3529dbd03a21627f30.scope: Deactivated successfully.
Nov 25 09:37:40 compute-0 podman[118085]: 2025-11-25 09:37:40.091923737 +0000 UTC m=+0.028256315 container create 5042d09318922b4f283cf30730f57da7c271071a72bdad33d85211e36f51277c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_leavitt, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:37:40 compute-0 systemd[1]: Started libpod-conmon-5042d09318922b4f283cf30730f57da7c271071a72bdad33d85211e36f51277c.scope.
Nov 25 09:37:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c18fa2162a4fd0b93a037ebffe7a8dbd375bd8151f1ae9d4b0f0459f36bcaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c18fa2162a4fd0b93a037ebffe7a8dbd375bd8151f1ae9d4b0f0459f36bcaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c18fa2162a4fd0b93a037ebffe7a8dbd375bd8151f1ae9d4b0f0459f36bcaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c18fa2162a4fd0b93a037ebffe7a8dbd375bd8151f1ae9d4b0f0459f36bcaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c18fa2162a4fd0b93a037ebffe7a8dbd375bd8151f1ae9d4b0f0459f36bcaf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:40 compute-0 podman[118085]: 2025-11-25 09:37:40.154878922 +0000 UTC m=+0.091211499 container init 5042d09318922b4f283cf30730f57da7c271071a72bdad33d85211e36f51277c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_leavitt, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:37:40 compute-0 podman[118085]: 2025-11-25 09:37:40.161359782 +0000 UTC m=+0.097692358 container start 5042d09318922b4f283cf30730f57da7c271071a72bdad33d85211e36f51277c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 09:37:40 compute-0 podman[118085]: 2025-11-25 09:37:40.162586713 +0000 UTC m=+0.098919290 container attach 5042d09318922b4f283cf30730f57da7c271071a72bdad33d85211e36f51277c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_leavitt, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 09:37:40 compute-0 podman[118085]: 2025-11-25 09:37:40.081456447 +0000 UTC m=+0.017789044 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:37:40 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Nov 25 09:37:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:40] "GET /metrics HTTP/1.1" 200 48357 "" "Prometheus/2.51.0"
Nov 25 09:37:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:40] "GET /metrics HTTP/1.1" 200 48357 "" "Prometheus/2.51.0"
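These paired access records are a single Prometheus scrape of the mgr's prometheus module, logged once by the containerized daemon's stdout and once by cherrypy inside ceph-mgr itself. A sketch of the same fetch; port 9283 is the module's default and is an assumption here, since the log records only the request path:

    import urllib.request

    # Default mgr prometheus endpoint (port assumed; not shown in the log).
    metrics = urllib.request.urlopen("http://192.168.122.100:9283/metrics").read()
    print(metrics.decode()[:200])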
Nov 25 09:37:40 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Nov 25 09:37:40 compute-0 ceph-mon[74207]: pgmap v178: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:40 compute-0 ceph-mon[74207]: 9.19 deep-scrub starts
Nov 25 09:37:40 compute-0 ceph-mon[74207]: 9.19 deep-scrub ok
Nov 25 09:37:40 compute-0 ceph-mon[74207]: osdmap e130: 3 total, 3 up, 3 in
Nov 25 09:37:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:37:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:37:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:37:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:37:40 compute-0 boring_leavitt[118099]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:37:40 compute-0 boring_leavitt[118099]: --> All data devices are unavailable
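"All data devices are unavailable" is the expected outcome of this pass, not a failure: the only candidate, /dev/ceph_vg0/ceph_lv0, already carries an OSD (the lvm list output further down tags it ceph.osd_id=1), so lvm batch has nothing left to create. A simplified sketch of that availability test, keyed to the tags structure shown below; ceph-volume's real filter also weighs size, rotational class, and whether the device is in use:

    def lv_is_available(lv: dict) -> bool:
        """An LV whose tags already include ceph.* markers belongs to an
        existing OSD and is skipped by batch (simplified heuristic)."""
        return not any(k.startswith("ceph.") for k in lv.get("tags", {}))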
Nov 25 09:37:40 compute-0 systemd[1]: libpod-5042d09318922b4f283cf30730f57da7c271071a72bdad33d85211e36f51277c.scope: Deactivated successfully.
Nov 25 09:37:40 compute-0 podman[118085]: 2025-11-25 09:37:40.433133824 +0000 UTC m=+0.369466401 container died 5042d09318922b4f283cf30730f57da7c271071a72bdad33d85211e36f51277c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:40 compute-0 sshd-session[118110]: Accepted publickey for zuul from 192.168.122.30 port 37536 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:37:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c18fa2162a4fd0b93a037ebffe7a8dbd375bd8151f1ae9d4b0f0459f36bcaf-merged.mount: Deactivated successfully.
Nov 25 09:37:40 compute-0 systemd-logind[744]: New session 41 of user zuul.
Nov 25 09:37:40 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 25 09:37:40 compute-0 podman[118085]: 2025-11-25 09:37:40.45939509 +0000 UTC m=+0.395727667 container remove 5042d09318922b4f283cf30730f57da7c271071a72bdad33d85211e36f51277c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_leavitt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:40 compute-0 sshd-session[118110]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:37:40 compute-0 systemd[1]: libpod-conmon-5042d09318922b4f283cf30730f57da7c271071a72bdad33d85211e36f51277c.scope: Deactivated successfully.
Nov 25 09:37:40 compute-0 sudo[117992]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:40 compute-0 sudo[118128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:40 compute-0 sudo[118128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:40 compute-0 sudo[118128]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:40 compute-0 sudo[118176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:37:40 compute-0 sudo[118176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:40.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:40 compute-0 podman[118261]: 2025-11-25 09:37:40.860696497 +0000 UTC m=+0.026084975 container create 18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lederberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:37:40 compute-0 systemd[1]: Started libpod-conmon-18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995.scope.
Nov 25 09:37:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:40 compute-0 podman[118261]: 2025-11-25 09:37:40.916618812 +0000 UTC m=+0.082007310 container init 18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lederberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:40 compute-0 podman[118261]: 2025-11-25 09:37:40.921357049 +0000 UTC m=+0.086745526 container start 18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 09:37:40 compute-0 podman[118261]: 2025-11-25 09:37:40.922405054 +0000 UTC m=+0.087793551 container attach 18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:37:40 compute-0 busy_lederberg[118298]: 167 167
Nov 25 09:37:40 compute-0 systemd[1]: libpod-18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995.scope: Deactivated successfully.
Nov 25 09:37:40 compute-0 conmon[118298]: conmon 18635f02d88f16731624 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995.scope/container/memory.events
Nov 25 09:37:40 compute-0 podman[118261]: 2025-11-25 09:37:40.925640369 +0000 UTC m=+0.091028846 container died 18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:37:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6f28ac02271871f590ff6922c14454eef6237a478d34d66348ff877a919ac56-merged.mount: Deactivated successfully.
Nov 25 09:37:40 compute-0 podman[118261]: 2025-11-25 09:37:40.943701687 +0000 UTC m=+0.109090163 container remove 18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:37:40 compute-0 podman[118261]: 2025-11-25 09:37:40.84998609 +0000 UTC m=+0.015374577 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:37:40 compute-0 systemd[1]: libpod-conmon-18635f02d88f1673162439b9484c3c245f9627af9f458309295919eb52cc7995.scope: Deactivated successfully.
Nov 25 09:37:41 compute-0 podman[118368]: 2025-11-25 09:37:41.062573586 +0000 UTC m=+0.029438912 container create 7e17573691567f27ea1d5050068635b3c04e75ea6fa1918e500aba3ee101ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:37:41 compute-0 systemd[1]: Started libpod-conmon-7e17573691567f27ea1d5050068635b3c04e75ea6fa1918e500aba3ee101ac62.scope.
Nov 25 09:37:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8182216171fe3de1d100d2e6dced70018ba61c264268f820c25ccf8159fff81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8182216171fe3de1d100d2e6dced70018ba61c264268f820c25ccf8159fff81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8182216171fe3de1d100d2e6dced70018ba61c264268f820c25ccf8159fff81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8182216171fe3de1d100d2e6dced70018ba61c264268f820c25ccf8159fff81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:41 compute-0 podman[118368]: 2025-11-25 09:37:41.124120399 +0000 UTC m=+0.090985735 container init 7e17573691567f27ea1d5050068635b3c04e75ea6fa1918e500aba3ee101ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_banach, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:37:41 compute-0 podman[118368]: 2025-11-25 09:37:41.128538372 +0000 UTC m=+0.095403708 container start 7e17573691567f27ea1d5050068635b3c04e75ea6fa1918e500aba3ee101ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_banach, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:37:41 compute-0 podman[118368]: 2025-11-25 09:37:41.130358872 +0000 UTC m=+0.097224198 container attach 7e17573691567f27ea1d5050068635b3c04e75ea6fa1918e500aba3ee101ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:37:41 compute-0 podman[118368]: 2025-11-25 09:37:41.050857003 +0000 UTC m=+0.017722349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:37:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Nov 25 09:37:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 25 09:37:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:41.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:41 compute-0 python3.9[118403]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 25 09:37:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 25 09:37:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 25 09:37:41 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 25 09:37:41 compute-0 ceph-mon[74207]: 9.1d scrub starts
Nov 25 09:37:41 compute-0 ceph-mon[74207]: 9.1d scrub ok
Nov 25 09:37:41 compute-0 ceph-mon[74207]: 9.1b deep-scrub starts
Nov 25 09:37:41 compute-0 ceph-mon[74207]: 9.1b deep-scrub ok
Nov 25 09:37:41 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 25 09:37:41 compute-0 pedantic_banach[118407]: {
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:     "1": [
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:         {
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "devices": [
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "/dev/loop3"
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             ],
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "lv_name": "ceph_lv0",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "lv_size": "21470642176",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "name": "ceph_lv0",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "tags": {
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.cluster_name": "ceph",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.crush_device_class": "",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.encrypted": "0",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.osd_id": "1",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.type": "block",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.vdo": "0",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:                 "ceph.with_tpm": "0"
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             },
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "type": "block",
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:             "vg_name": "ceph_vg0"
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:         }
Nov 25 09:37:41 compute-0 pedantic_banach[118407]:     ]
Nov 25 09:37:41 compute-0 pedantic_banach[118407]: }
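The pedantic_banach container is the 'ceph-volume lvm list --format json' call from the sudo record above, and its per-line journal output reassembles into the JSON document just printed. A small consumer of that payload, using exactly the field names shown, to recover the OSD-to-LV mapping:

    import json

    def osd_to_lv(lvm_list_stdout: str) -> dict[str, str]:
        """Map OSD id -> backing LV path from 'ceph-volume lvm list
        --format json'; top-level keys are OSD ids, values are LV records."""
        mapping = {}
        for osd_id, lvs in json.loads(lvm_list_stdout).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    mapping[osd_id] = lv["lv_path"]
        return mapping

    # Fed the document above, this returns {"1": "/dev/ceph_vg0/ceph_lv0"}.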
Nov 25 09:37:41 compute-0 systemd[1]: libpod-7e17573691567f27ea1d5050068635b3c04e75ea6fa1918e500aba3ee101ac62.scope: Deactivated successfully.
Nov 25 09:37:41 compute-0 podman[118368]: 2025-11-25 09:37:41.35628046 +0000 UTC m=+0.323145786 container died 7e17573691567f27ea1d5050068635b3c04e75ea6fa1918e500aba3ee101ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_banach, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8182216171fe3de1d100d2e6dced70018ba61c264268f820c25ccf8159fff81-merged.mount: Deactivated successfully.
Nov 25 09:37:41 compute-0 podman[118368]: 2025-11-25 09:37:41.379384347 +0000 UTC m=+0.346249673 container remove 7e17573691567f27ea1d5050068635b3c04e75ea6fa1918e500aba3ee101ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:37:41 compute-0 systemd[1]: libpod-conmon-7e17573691567f27ea1d5050068635b3c04e75ea6fa1918e500aba3ee101ac62.scope: Deactivated successfully.
Nov 25 09:37:41 compute-0 sudo[118176]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:41 compute-0 sudo[118429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:37:41 compute-0 sudo[118429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:41 compute-0 sudo[118429]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:41 compute-0 sudo[118454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:37:41 compute-0 sudo[118454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:41 compute-0 podman[118611]: 2025-11-25 09:37:41.795620311 +0000 UTC m=+0.028284127 container create 3a4cb8b520b7d1c33c852798951514ebad16be4622de05bcca090f8c3f96f306 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:37:41 compute-0 systemd[1]: Started libpod-conmon-3a4cb8b520b7d1c33c852798951514ebad16be4622de05bcca090f8c3f96f306.scope.
Nov 25 09:37:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:41 compute-0 podman[118611]: 2025-11-25 09:37:41.845510903 +0000 UTC m=+0.078174729 container init 3a4cb8b520b7d1c33c852798951514ebad16be4622de05bcca090f8c3f96f306 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lamarr, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:37:41 compute-0 podman[118611]: 2025-11-25 09:37:41.850387259 +0000 UTC m=+0.083051065 container start 3a4cb8b520b7d1c33c852798951514ebad16be4622de05bcca090f8c3f96f306 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:37:41 compute-0 podman[118611]: 2025-11-25 09:37:41.851800021 +0000 UTC m=+0.084463827 container attach 3a4cb8b520b7d1c33c852798951514ebad16be4622de05bcca090f8c3f96f306 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:37:41 compute-0 confident_lamarr[118648]: 167 167
Nov 25 09:37:41 compute-0 systemd[1]: libpod-3a4cb8b520b7d1c33c852798951514ebad16be4622de05bcca090f8c3f96f306.scope: Deactivated successfully.
Nov 25 09:37:41 compute-0 podman[118611]: 2025-11-25 09:37:41.853157438 +0000 UTC m=+0.085821245 container died 3a4cb8b520b7d1c33c852798951514ebad16be4622de05bcca090f8c3f96f306 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lamarr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-69d61877a5bfd6efc09fb18d03172b8329b1acabee0746b43556db25e01cc92e-merged.mount: Deactivated successfully.
Nov 25 09:37:41 compute-0 podman[118611]: 2025-11-25 09:37:41.870627341 +0000 UTC m=+0.103291147 container remove 3a4cb8b520b7d1c33c852798951514ebad16be4622de05bcca090f8c3f96f306 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:37:41 compute-0 podman[118611]: 2025-11-25 09:37:41.784445147 +0000 UTC m=+0.017108973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:37:41 compute-0 systemd[1]: libpod-conmon-3a4cb8b520b7d1c33c852798951514ebad16be4622de05bcca090f8c3f96f306.scope: Deactivated successfully.
Nov 25 09:37:41 compute-0 podman[118697]: 2025-11-25 09:37:41.984321846 +0000 UTC m=+0.028920436 container create 616335be01a2a10ef45feee265a43679bc0734a6c248f6617216cb6bb1477040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:37:42 compute-0 systemd[1]: Started libpod-conmon-616335be01a2a10ef45feee265a43679bc0734a6c248f6617216cb6bb1477040.scope.
Nov 25 09:37:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d92480a8e4f237d86922afc44d0d7de6e0da4bf134e3cd281c681ac2bfddab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d92480a8e4f237d86922afc44d0d7de6e0da4bf134e3cd281c681ac2bfddab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d92480a8e4f237d86922afc44d0d7de6e0da4bf134e3cd281c681ac2bfddab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d92480a8e4f237d86922afc44d0d7de6e0da4bf134e3cd281c681ac2bfddab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:37:42 compute-0 podman[118697]: 2025-11-25 09:37:42.0446482 +0000 UTC m=+0.089246809 container init 616335be01a2a10ef45feee265a43679bc0734a6c248f6617216cb6bb1477040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatterjee, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:37:42 compute-0 podman[118697]: 2025-11-25 09:37:42.050622194 +0000 UTC m=+0.095220784 container start 616335be01a2a10ef45feee265a43679bc0734a6c248f6617216cb6bb1477040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:37:42 compute-0 podman[118697]: 2025-11-25 09:37:42.053919136 +0000 UTC m=+0.098517724 container attach 616335be01a2a10ef45feee265a43679bc0734a6c248f6617216cb6bb1477040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatterjee, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:37:42 compute-0 podman[118697]: 2025-11-25 09:37:41.972528749 +0000 UTC m=+0.017127359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:37:42 compute-0 python3.9[118681]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:37:42 compute-0 ceph-mon[74207]: pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:42 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 25 09:37:42 compute-0 ceph-mon[74207]: osdmap e131: 3 total, 3 up, 3 in
Nov 25 09:37:42 compute-0 lvm[118852]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:37:42 compute-0 lvm[118852]: VG ceph_vg0 finished
Nov 25 09:37:42 compute-0 frosty_chatterjee[118710]: {}
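The companion 'ceph-volume raw list --format json' pass (frosty_chatterjee) prints an empty object because the host's only OSD is LVM-backed; there are no raw-mode, whole-device OSDs to report. A trivial cross-check in the same style as the sketch above:

    import json

    def lvm_only(lvm_stdout: str, raw_stdout: str) -> bool:
        """True when 'lvm list' reports OSDs while 'raw list' reports none,
        which is exactly what the two container outputs above show."""
        return bool(json.loads(lvm_stdout)) and not json.loads(raw_stdout)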
Nov 25 09:37:42 compute-0 systemd[1]: libpod-616335be01a2a10ef45feee265a43679bc0734a6c248f6617216cb6bb1477040.scope: Deactivated successfully.
Nov 25 09:37:42 compute-0 podman[118865]: 2025-11-25 09:37:42.575771547 +0000 UTC m=+0.017642138 container died 616335be01a2a10ef45feee265a43679bc0734a6c248f6617216cb6bb1477040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatterjee, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-20d92480a8e4f237d86922afc44d0d7de6e0da4bf134e3cd281c681ac2bfddab-merged.mount: Deactivated successfully.
Nov 25 09:37:42 compute-0 podman[118865]: 2025-11-25 09:37:42.597734946 +0000 UTC m=+0.039605526 container remove 616335be01a2a10ef45feee265a43679bc0734a6c248f6617216cb6bb1477040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 09:37:42 compute-0 systemd[1]: libpod-conmon-616335be01a2a10ef45feee265a43679bc0734a6c248f6617216cb6bb1477040.scope: Deactivated successfully.
Nov 25 09:37:42 compute-0 sudo[118454]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:37:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:37:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:42.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:42 compute-0 sudo[118919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:37:42 compute-0 sudo[118919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:42 compute-0 sudo[118919]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:43 compute-0 python3.9[119017]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:37:43 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 25 09:37:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:37:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:43.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:43 compute-0 sshd-session[118127]: Connection closed by 192.168.122.30 port 37536
Nov 25 09:37:43 compute-0 sshd-session[118110]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:37:43 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 25 09:37:43 compute-0 systemd[1]: session-41.scope: Consumed 1.681s CPU time.
Nov 25 09:37:43 compute-0 systemd-logind[744]: Session 41 logged out. Waiting for processes to exit.
Nov 25 09:37:43 compute-0 systemd-logind[744]: Removed session 41.
Nov 25 09:37:43 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:43 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:37:43 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 09:37:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 25 09:37:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:37:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 25 09:37:43 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 25 09:37:43 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:43 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:43 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:37:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 25 09:37:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:44.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 25 09:37:44 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 25 09:37:44 compute-0 ceph-mon[74207]: 9.12 scrub starts
Nov 25 09:37:44 compute-0 ceph-mon[74207]: 9.12 scrub ok
Nov 25 09:37:44 compute-0 ceph-mon[74207]: pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 09:37:44 compute-0 ceph-mon[74207]: osdmap e132: 3 total, 3 up, 3 in
Nov 25 09:37:44 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:44 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:37:44
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.control', 'volumes', '.nfs', 'backups', 'images', 'cephfs.cephfs.data', '.mgr']
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:37:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:37:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:37:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:44 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd160000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:37:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:37:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:37:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:37:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:37:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:37:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:37:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:45.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 25 09:37:45 compute-0 ceph-mon[74207]: 9.15 scrub starts
Nov 25 09:37:45 compute-0 ceph-mon[74207]: 9.15 scrub ok
Nov 25 09:37:45 compute-0 ceph-mon[74207]: osdmap e133: 3 total, 3 up, 3 in
Nov 25 09:37:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:37:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 25 09:37:45 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 25 09:37:45 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:45 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:45 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd154001e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:46 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd1500014c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 25 09:37:46 compute-0 ceph-mon[74207]: 9.d scrub starts
Nov 25 09:37:46 compute-0 ceph-mon[74207]: 9.d scrub ok
Nov 25 09:37:46 compute-0 ceph-mon[74207]: pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:46 compute-0 ceph-mon[74207]: osdmap e134: 3 total, 3 up, 3 in
Nov 25 09:37:46 compute-0 ceph-mon[74207]: 9.f scrub starts
Nov 25 09:37:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 25 09:37:46 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 25 09:37:46 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 09:37:46 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 09:37:46 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=134/135 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:46.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:46.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:46.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:46.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093746 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:37:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:46 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd15c001ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:47 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 25 09:37:47 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 25 09:37:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v188: 337 pgs: 1 active+remapped, 1 peering, 1 active+clean+scrubbing, 334 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:47.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 25 09:37:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 25 09:37:47 compute-0 ceph-mon[74207]: 9.f scrub ok
Nov 25 09:37:47 compute-0 ceph-mon[74207]: osdmap e135: 3 total, 3 up, 3 in
Nov 25 09:37:47 compute-0 ceph-mon[74207]: 9.a deep-scrub starts
Nov 25 09:37:47 compute-0 ceph-mon[74207]: 9.1e scrub starts
Nov 25 09:37:47 compute-0 ceph-mon[74207]: 9.1e scrub ok
Nov 25 09:37:47 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 25 09:37:47 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=135/136 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 09:37:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:47 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd154002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:48 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 25 09:37:48 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 25 09:37:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:48 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd154002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:48 compute-0 sshd-session[119064]: Accepted publickey for zuul from 192.168.122.30 port 37552 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:37:48 compute-0 systemd-logind[744]: New session 42 of user zuul.
Nov 25 09:37:48 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 25 09:37:48 compute-0 sshd-session[119064]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:37:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:48.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:48 compute-0 ceph-mon[74207]: 9.a deep-scrub ok
Nov 25 09:37:48 compute-0 ceph-mon[74207]: pgmap v188: 337 pgs: 1 active+remapped, 1 peering, 1 active+clean+scrubbing, 334 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:48 compute-0 ceph-mon[74207]: osdmap e136: 3 total, 3 up, 3 in
Nov 25 09:37:48 compute-0 ceph-mon[74207]: 9.1f scrub starts
Nov 25 09:37:48 compute-0 ceph-mon[74207]: 9.1f scrub ok
Nov 25 09:37:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:48 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd15c001ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v190: 337 pgs: 1 active+remapped, 1 peering, 1 active+clean+scrubbing, 334 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:49 compute-0 python3.9[119217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:49.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:49 compute-0 ceph-mon[74207]: 9.e deep-scrub starts
Nov 25 09:37:49 compute-0 ceph-mon[74207]: 9.e deep-scrub ok
Nov 25 09:37:49 compute-0 ceph-mon[74207]: 9.6 scrub starts
Nov 25 09:37:49 compute-0 ceph-mon[74207]: 9.6 scrub ok
Nov 25 09:37:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:49 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:50 compute-0 python3.9[119372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:37:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:50 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:50] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:37:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:37:50] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:37:50 compute-0 sudo[119527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tybqhcdbhlyhqjzjjsegfwvnayylmiyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063470.437428-80-223609484616183/AnsiballZ_setup.py'
Nov 25 09:37:50 compute-0 sudo[119527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:50.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:50 compute-0 ceph-mon[74207]: pgmap v190: 337 pgs: 1 active+remapped, 1 peering, 1 active+clean+scrubbing, 334 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:50 compute-0 python3.9[119529]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:37:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:50 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:51 compute-0 sudo[119527]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:51.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:51 compute-0 sudo[119611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbrcedlhydbgugfbvkiyhhxxjgxviaht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063470.437428-80-223609484616183/AnsiballZ_dnf.py'
Nov 25 09:37:51 compute-0 sudo[119611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:51 compute-0 python3.9[119613]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:51 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd15c002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:52 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:52 compute-0 sudo[119611]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:52.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:52 compute-0 ceph-mon[74207]: pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:52 compute-0 sudo[119766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svjlnkoeiqfkvzckmucasdvbramnmwfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063472.7314715-116-53356743477181/AnsiballZ_setup.py'
Nov 25 09:37:52 compute-0 sudo[119766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:52 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:53 compute-0 python3.9[119768]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:37:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:37:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:53.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:37:53 compute-0 sudo[119766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:53 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd154003620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:54 compute-0 sudo[119963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydnvjxgfhhfdwlkcobvyskdxopkafvuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063473.6760037-149-275865862862937/AnsiballZ_file.py'
Nov 25 09:37:54 compute-0 sudo[119963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:54 compute-0 python3.9[119965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:37:54 compute-0 sudo[119963]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:54 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd154003620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:54 compute-0 sudo[120115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztzwsomdfuijjkfpckhdkbtobsrhuinp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063474.3207164-173-109138196560393/AnsiballZ_command.py'
Nov 25 09:37:54 compute-0 sudo[120115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:54.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:54 compute-0 ceph-mon[74207]: pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:54 compute-0 python3.9[120117]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:37:54 compute-0 sudo[120115]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:54 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:37:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:55.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:37:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:37:55 compute-0 sudo[120276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyuozsxmfegmxnhpcyestpykblildpcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063474.9898083-197-262449010902765/AnsiballZ_stat.py'
Nov 25 09:37:55 compute-0 sudo[120276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:55 compute-0 python3.9[120278]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:37:55 compute-0 sudo[120276]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:55 compute-0 sudo[120355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmpxdjotpxqscmimasxzrkxqbnakzubc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063474.9898083-197-262449010902765/AnsiballZ_file.py'
Nov 25 09:37:55 compute-0 sudo[120355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:55 compute-0 python3.9[120357]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:37:55 compute-0 sudo[120355]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:55 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd154004330 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:56 compute-0 sudo[120508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfzcjrxeheupcikxpzagyuvxmrryripy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063475.9913735-233-20188768511266/AnsiballZ_stat.py'
Nov 25 09:37:56 compute-0 sudo[120508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:56 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd15c002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:56 compute-0 python3.9[120510]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:37:56 compute-0 sudo[120508]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:56 compute-0 sudo[120586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ronrquqsblduuzkwfemlwhnttnjxytyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063475.9913735-233-20188768511266/AnsiballZ_file.py'
Nov 25 09:37:56 compute-0 sudo[120586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:56 compute-0 python3.9[120588]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:56.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:56 compute-0 sudo[120586]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:56 compute-0 ceph-mon[74207]: pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:56.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:56.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:56.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:37:56.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:37:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:56 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd154004330 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:57 compute-0 sudo[120738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlvbaldcwttzaowegppzorcttzszwqxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063476.9400542-272-129063486465832/AnsiballZ_ini_file.py'
Nov 25 09:37:57 compute-0 sudo[120738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:37:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:57.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:37:57 compute-0 python3.9[120740]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:57 compute-0 sudo[120738]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:57 compute-0 sudo[120891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wniwgkdzeohvwciczcqznvpgjwvymtdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063477.5123408-272-144782074035316/AnsiballZ_ini_file.py'
Nov 25 09:37:57 compute-0 sudo[120891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:37:57 compute-0 ceph-mon[74207]: pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:57 compute-0 python3.9[120893]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:57 compute-0 sudo[120891]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:57 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:58 compute-0 sudo[121044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgdarkxkpnjpyiseqizqtewuixtdfkpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063477.9570162-272-60886185568323/AnsiballZ_ini_file.py'
Nov 25 09:37:58 compute-0 sudo[121044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:58 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd154004330 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:58 compute-0 python3.9[121046]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:58 compute-0 sudo[121044]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:58 compute-0 sudo[121196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvtznzsvuoriikebotliycqrdoeonfci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063478.4067497-272-220676515029534/AnsiballZ_ini_file.py'
Nov 25 09:37:58 compute-0 sudo[121196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:58.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:58 compute-0 python3.9[121198]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:37:58 compute-0 sudo[121196]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:58 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd15c003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:37:59 compute-0 sudo[121298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:37:59 compute-0 sudo[121298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:37:59 compute-0 sudo[121298]: pam_unix(sudo:session): session closed for user root
Nov 25 09:37:59 compute-0 sudo[121373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmdfbzmltzsikdwwrqeejwmdeunojeyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063479.0254846-365-257233328184191/AnsiballZ_dnf.py'
Nov 25 09:37:59 compute-0 sudo[121373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:37:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:37:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:37:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:59.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:37:59 compute-0 python3.9[121375]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:37:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093759 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:37:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:37:59 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd154005430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:37:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:37:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:38:00 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:00 compute-0 ceph-mon[74207]: pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:38:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:00] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 25 09:38:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:00] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 25 09:38:00 compute-0 sudo[121373]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:00.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:38:00 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:01 compute-0 sudo[121528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktxhyyuhgmxynybknegsdvwffgoauhjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063480.8327353-398-39702555260612/AnsiballZ_setup.py'
Nov 25 09:38:01 compute-0 sudo[121528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:38:01 compute-0 anacron[4565]: Job `cron.daily' started
Nov 25 09:38:01 compute-0 anacron[4565]: Job `cron.daily' terminated
Nov 25 09:38:01 compute-0 python3.9[121530]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:38:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:01.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:01 compute-0 sudo[121528]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:01 compute-0 sudo[121684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfatifxbwvynrreagnjuhbilqlkxxvgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063481.456882-422-151052307559483/AnsiballZ_stat.py'
Nov 25 09:38:01 compute-0 sudo[121684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:01 compute-0 python3.9[121686]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:38:01 compute-0 sudo[121684]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:38:01 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:02 compute-0 sudo[121838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjmhkqahfetlxrfrjnlgajfoibgaxhxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063482.033413-449-127410212935672/AnsiballZ_stat.py'
Nov 25 09:38:02 compute-0 sudo[121838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:38:02 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:02 compute-0 ceph-mon[74207]: pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 09:38:02 compute-0 python3.9[121840]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:38:02 compute-0 sudo[121838]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:02.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:02 compute-0 sudo[121990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdphthhbrwhaiwngxoaxbrdcvnfdgany ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063482.6724997-479-68517235838387/AnsiballZ_command.py'
Nov 25 09:38:02 compute-0 sudo[121990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:02 compute-0 python3.9[121992]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:38:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[116875]: 25/11/2025 09:38:02 : epoch 692578dc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd150004040 fd 39 proxy ignored for local
Nov 25 09:38:03 compute-0 kernel: ganesha.nfsd[119059]: segfault at 50 ip 00007fd20e3a432e sp 00007fd1c67fb210 error 4 in libntirpc.so.5.8[7fd20e389000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 25 09:38:03 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:38:03 compute-0 sudo[121990]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:03 compute-0 systemd[1]: Started Process Core Dump (PID 121994/UID 0).
Nov 25 09:38:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:03.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:03 compute-0 sudo[122145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaobywdjvkwhujkuztvcbqyvcwovyenc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063483.3112264-509-111071198147480/AnsiballZ_service_facts.py'
Nov 25 09:38:03 compute-0 sudo[122145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:03 compute-0 python3.9[122147]: ansible-service_facts Invoked
Nov 25 09:38:03 compute-0 network[122165]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:38:03 compute-0 network[122166]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:38:03 compute-0 network[122167]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:38:03 compute-0 systemd-coredump[121995]: Process 116879 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007fd20e3a432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:38:04 compute-0 podman[122175]: 2025-11-25 09:38:04.079724563 +0000 UTC m=+0.018576325 container died 9671431d2cc7041cad268f763e1621afb7848bcaff0c37f8ccb22e9fe6f05c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 09:38:04 compute-0 podman[122175]: 2025-11-25 09:38:04.102129912 +0000 UTC m=+0.040981654 container remove 9671431d2cc7041cad268f763e1621afb7848bcaff0c37f8ccb22e9fe6f05c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:04 compute-0 ceph-mon[74207]: pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b67eda61695ba4816aca781e0108f8fc6e5c458892520d2a89c4ab81b3bc6ead-merged.mount: Deactivated successfully.
Nov 25 09:38:04 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:38:04 compute-0 systemd[1]: systemd-coredump@2-121994-0.service: Deactivated successfully.
Nov 25 09:38:04 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
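The segfault, the core dump, and the status=139 exit above are one event: 139 is 128+11, i.e. the unit's main process died on SIGSEGV. The kernel line carries the detail: on x86-64, page-fault error code 4 means a user-mode read of a non-present page, and the instruction marked <45> in the Code: bytes decodes to mov r12d,[r13+0x50], so "segfault at 50" is a NULL r13 dereferenced at offset 0x50 inside libntirpc. A way to dig further on this host, assuming systemd-coredump retained the dump and debuginfo can be installed:

    # list captured ganesha crashes, then inspect the one for PID 116879
    coredumpctl list ganesha.nfsd
    coredumpctl info 116879
    # open the core under gdb to symbolize frame #0 (libntirpc.so.5.8 + 0x2232e)
    coredumpctl debug 116879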
Nov 25 09:38:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:04.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:05.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:05 compute-0 sudo[122145]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:06 compute-0 ceph-mon[74207]: pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:06.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:06.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:06.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:06.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:06.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
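These alertmanager retries are a DNS problem rather than a webhook problem: the ceph-dashboard receiver posts to np0005534694/np0005534695/np0005534696.shiftstack, and the resolver at 192.168.122.80 answers "no such host" for all three. Two quick checks that only confirm what the log already says:

    # does the name resolve through the system resolver at all?
    getent hosts np0005534694.shiftstack
    # ask the configured server directly (192.168.122.80, from the log)
    dig +short @192.168.122.80 np0005534694.shiftstack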
Nov 25 09:38:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:38:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:38:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:07.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:38:07 compute-0 sudo[122490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocfscdazodjxcefghkgzirqxlvffddjg ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764063487.355312-554-71886351029721/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764063487.355312-554-71886351029721/args'
Nov 25 09:38:07 compute-0 sudo[122490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:07 compute-0 sudo[122490]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:08 compute-0 sudo[122659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loyopdqzhbqgaxruwyizjuojiliwzabt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063487.9023035-587-18972156771636/AnsiballZ_dnf.py'
Nov 25 09:38:08 compute-0 sudo[122659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:08 compute-0 ceph-mon[74207]: pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:38:08 compute-0 python3.9[122661]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:38:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:08.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093809 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:38:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:38:09 compute-0 sudo[122659]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:38:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:09.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:38:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:10] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 25 09:38:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:10] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 25 09:38:10 compute-0 ceph-mon[74207]: pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:38:10 compute-0 sudo[122814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaessqquojrdnbdckwfjwzlevgopttsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063489.7924416-626-86039332530696/AnsiballZ_package_facts.py'
Nov 25 09:38:10 compute-0 sudo[122814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:10 compute-0 python3.9[122816]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 09:38:10 compute-0 sudo[122814]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:10.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:38:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:11.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:11 compute-0 sudo[122966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhbtgdfzirmaftscrwtvahklvcqefgic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063491.3099663-656-89198194546068/AnsiballZ_stat.py'
Nov 25 09:38:11 compute-0 sudo[122966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:11 compute-0 python3.9[122968]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:11 compute-0 sudo[122966]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:11 compute-0 sudo[123045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spdvnmgwrzrbrhuhiuxphghltnktkrfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063491.3099663-656-89198194546068/AnsiballZ_file.py'
Nov 25 09:38:11 compute-0 sudo[123045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:12 compute-0 python3.9[123047]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:12 compute-0 sudo[123045]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:12 compute-0 ceph-mon[74207]: pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:38:12 compute-0 sudo[123198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdnjtejgjzxbrgqppdrpfebkfemsysmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063492.3005443-692-272257956886763/AnsiballZ_stat.py'
Nov 25 09:38:12 compute-0 sudo[123198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:12 compute-0 python3.9[123200]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:12 compute-0 sudo[123198]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:12.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:12 compute-0 sudo[123276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdtnsbsjgsytjxhnuwkrukcnzdrokbnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063492.3005443-692-272257956886763/AnsiballZ_file.py'
Nov 25 09:38:12 compute-0 sudo[123276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:12 compute-0 python3.9[123278]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:12 compute-0 sudo[123276]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:38:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:38:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:13.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:38:14 compute-0 sudo[123430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epygvjdvfliashdvnnjtpuugbtgorjfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063493.9050872-746-252888891182626/AnsiballZ_lineinfile.py'
Nov 25 09:38:14 compute-0 sudo[123430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:14 compute-0 ceph-mon[74207]: pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:38:14 compute-0 python3.9[123432]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
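The lineinfile task above ensures exactly one PEERNTP line in /etc/sysconfig/network (create=True makes the file if needed; backup=True keeps a timestamped copy first). A roughly equivalent shell one-liner, shown only to make the regexp/state=present semantics concrete:

    # replace an existing PEERNTP= line, or append one if none matches
    grep -q '^PEERNTP=' /etc/sysconfig/network \
      && sed -i 's/^PEERNTP=.*/PEERNTP=no/' /etc/sysconfig/network \
      || echo 'PEERNTP=no' >> /etc/sysconfig/network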
Nov 25 09:38:14 compute-0 sudo[123430]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:14 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 3.
Nov 25 09:38:14 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:38:14 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
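The unit's Restart= policy is doing the recovery here; this is the third automatic restart since the segfaults began. The counter and the policy can be read back from systemd directly (unit name copied verbatim from the log):

    systemctl show ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service \
      -p Restart -p RestartUSec -p NRestarts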
Nov 25 09:38:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:14.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:14 compute-0 podman[123495]: 2025-11-25 09:38:14.82174826 +0000 UTC m=+0.028655117 container create e02f789246dd6fd2b7da468085fc53bece883451d55b67498fa5acdc6d736601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63f9444d049f868877f0283043709282975868bae60ce4c5beeb82c2e80a2aa/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63f9444d049f868877f0283043709282975868bae60ce4c5beeb82c2e80a2aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63f9444d049f868877f0283043709282975868bae60ce4c5beeb82c2e80a2aa/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63f9444d049f868877f0283043709282975868bae60ce4c5beeb82c2e80a2aa/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:14 compute-0 podman[123495]: 2025-11-25 09:38:14.862929109 +0000 UTC m=+0.069835985 container init e02f789246dd6fd2b7da468085fc53bece883451d55b67498fa5acdc6d736601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:14 compute-0 podman[123495]: 2025-11-25 09:38:14.867412558 +0000 UTC m=+0.074319414 container start e02f789246dd6fd2b7da468085fc53bece883451d55b67498fa5acdc6d736601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:38:14 compute-0 bash[123495]: e02f789246dd6fd2b7da468085fc53bece883451d55b67498fa5acdc6d736601
Nov 25 09:38:14 compute-0 podman[123495]: 2025-11-25 09:38:14.810089541 +0000 UTC m=+0.016996418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:38:14 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:38:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:38:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:38:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:38:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:38:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:38:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:38:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:38:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:38:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:38:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:38:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:15.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:15 compute-0 sudo[123674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgmibjlqdfybxoftlxhmhghlglrptrxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063495.4312658-791-119569809870011/AnsiballZ_setup.py'
Nov 25 09:38:15 compute-0 sudo[123674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:15 compute-0 python3.9[123676]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:38:16 compute-0 sudo[123674]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:16 compute-0 ceph-mon[74207]: pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:38:16 compute-0 sudo[123760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biufrkfajsedrictuvxsgnraristjkxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063495.4312658-791-119569809870011/AnsiballZ_systemd.py'
Nov 25 09:38:16 compute-0 sudo[123760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:38:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:16.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:38:16 compute-0 python3.9[123762]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
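The systemd task with enabled=True and state=started amounts to the following pair of commands; the module simply performs both idempotently in one step:

    systemctl enable chronyd.service
    systemctl start chronyd.service
    systemctl is-enabled chronyd.service   # should now print "enabled"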
Nov 25 09:38:16 compute-0 sudo[123760]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:16.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:16.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:16.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:16.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:17.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:17 compute-0 sshd-session[119067]: Connection closed by 192.168.122.30 port 37552
Nov 25 09:38:17 compute-0 sshd-session[119064]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:38:17 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 25 09:38:17 compute-0 systemd[1]: session-42.scope: Consumed 16.550s CPU time.
Nov 25 09:38:17 compute-0 systemd-logind[744]: Session 42 logged out. Waiting for processes to exit.
Nov 25 09:38:17 compute-0 systemd-logind[744]: Removed session 42.
Nov 25 09:38:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:18 compute-0 ceph-mon[74207]: pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:19 compute-0 sudo[123791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:19 compute-0 sudo[123791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:19 compute-0 sudo[123791]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:19.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:20] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 25 09:38:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:20] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 25 09:38:20 compute-0 ceph-mon[74207]: pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:38:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:38:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 09:38:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:21 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:38:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:21 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:38:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:21 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
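The ret=-2 in the rados_kv_traverse line above is -ENOENT: the restarted ganesha found no recovery object to read client IDs from, so each grace pass completes with a zero client count. The recovery state, when it exists, lives as objects in RADOS; the pool and namespace below are assumptions based on cephadm's usual layout for an NFS cluster named cephfs, not something this log confirms:

    # list recovery/config objects for the nfs cluster (pool/namespace are assumptions)
    rados -p .nfs --namespace cephfs ls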
Nov 25 09:38:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:38:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:21.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:22 compute-0 ceph-mon[74207]: pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:38:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:22.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:23 compute-0 sshd-session[123820]: Accepted publickey for zuul from 192.168.122.30 port 52908 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:38:23 compute-0 systemd-logind[744]: New session 43 of user zuul.
Nov 25 09:38:23 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 25 09:38:23 compute-0 sshd-session[123820]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:38:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 25 09:38:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:23.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:23 compute-0 sudo[123973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmqfrxyeioupnwttpxberufgexqffhqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063503.1662507-26-200124542822931/AnsiballZ_file.py'
Nov 25 09:38:23 compute-0 sudo[123973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:23 compute-0 python3.9[123975]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:23 compute-0 sudo[123973]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093823 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:38:24 compute-0 sudo[124127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvnazhxvknpqmcdxhonxgcaaqlcbpacp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063503.883908-62-234338101970916/AnsiballZ_stat.py'
Nov 25 09:38:24 compute-0 sudo[124127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:24 compute-0 ceph-mon[74207]: pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 25 09:38:24 compute-0 python3.9[124129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:24 compute-0 sudo[124127]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:24 compute-0 sudo[124205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gziuykatyhimbiubddivvtfajcgaicgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063503.883908-62-234338101970916/AnsiballZ_file.py'
Nov 25 09:38:24 compute-0 sudo[124205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:24 compute-0 python3.9[124207]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:24 compute-0 sudo[124205]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:24.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:25 compute-0 sshd-session[123823]: Connection closed by 192.168.122.30 port 52908
Nov 25 09:38:25 compute-0 sshd-session[123820]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:38:25 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 25 09:38:25 compute-0 systemd[1]: session-43.scope: Consumed 1.012s CPU time.
Nov 25 09:38:25 compute-0 systemd-logind[744]: Session 43 logged out. Waiting for processes to exit.
Nov 25 09:38:25 compute-0 systemd-logind[744]: Removed session 43.
Nov 25 09:38:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 25 09:38:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:25.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:26 compute-0 ceph-mon[74207]: pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 25 09:38:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:26.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:26.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:26.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:26.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:26.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000009:nfs.cephfs.2: -2
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:38:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Nov 25 09:38:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:38:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:27.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:38:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc060000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:28 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054001e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:28 compute-0 ceph-mon[74207]: pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Nov 25 09:38:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:28.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:29 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:38:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:38:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:29.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:38:29 compute-0 sshd-session[124253]: Accepted publickey for zuul from 192.168.122.30 port 40008 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:38:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:29 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc060000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:29 compute-0 systemd-logind[744]: New session 44 of user zuul.
Nov 25 09:38:29 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 25 09:38:29 compute-0 sshd-session[124253]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:38:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:38:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:30 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054002a90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:30] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:38:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:30] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:38:30 compute-0 ceph-mon[74207]: pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:38:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:30 compute-0 python3.9[124407]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:38:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:30.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093831 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:38:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:31 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054002a90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:38:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:31.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:31 compute-0 sudo[124561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-airajfxeejjmcxuaroviqxdbbwmylgbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063511.033218-59-175030997313365/AnsiballZ_file.py'
Nov 25 09:38:31 compute-0 sudo[124561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:31 compute-0 python3.9[124563]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:31 compute-0 sudo[124561]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:31 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c0025e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:32 compute-0 sudo[124738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiprhbfnggqsiyniqcriugallekiboiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063511.663004-83-122693182417334/AnsiballZ_stat.py'
Nov 25 09:38:32 compute-0 sudo[124738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:32 compute-0 python3.9[124740]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:32 compute-0 sudo[124738]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:32 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:32 compute-0 ceph-mon[74207]: pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:38:32 compute-0 sudo[124816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smeuoyepowyptzvxfoyxkjlfyhbqtybo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063511.663004-83-122693182417334/AnsiballZ_file.py'
Nov 25 09:38:32 compute-0 sudo[124816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:32 compute-0 python3.9[124818]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.gg2jgcuz recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:32 compute-0 sudo[124816]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:38:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:32.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:38:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:33 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:33 compute-0 sudo[124968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mszpjchqetzmyzfqjsmtwofjthirdpzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063512.9914496-143-3129146366395/AnsiballZ_stat.py'
Nov 25 09:38:33 compute-0 sudo[124968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:33.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:33 compute-0 python3.9[124970]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:33 compute-0 sudo[124968]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:33 compute-0 sudo[125046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eylhgxllqffqbwbrgiwxncvhmzcvlegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063512.9914496-143-3129146366395/AnsiballZ_file.py'
Nov 25 09:38:33 compute-0 sudo[125046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:33 compute-0 python3.9[125048]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.fixpp_ph recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:33 compute-0 sudo[125046]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:33 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0540037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:34 compute-0 sudo[125200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boenbxzugbdkeupgshzryxvyychnthwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063513.8704958-182-209071661824067/AnsiballZ_file.py'
Nov 25 09:38:34 compute-0 sudo[125200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:34 compute-0 python3.9[125202]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:38:34 compute-0 sudo[125200]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:34 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c0025e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:34 compute-0 ceph-mon[74207]: pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:34 compute-0 sudo[125352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpxiyjedngftjiwngmqmpltifmugjbut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063514.3715594-206-138318416149719/AnsiballZ_stat.py'
Nov 25 09:38:34 compute-0 sudo[125352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:34 compute-0 python3.9[125354]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:34.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:34 compute-0 sudo[125352]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:34 compute-0 sudo[125430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghdxufhllvluoffsjfmbvehtwcukunut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063514.3715594-206-138318416149719/AnsiballZ_file.py'
Nov 25 09:38:34 compute-0 sudo[125430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:35 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:35 compute-0 python3.9[125432]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:38:35 compute-0 sudo[125430]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:35 compute-0 sudo[125582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txtuevhnycumxxvyfspedbdgyvhgckme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063515.1345768-206-195558840858141/AnsiballZ_stat.py'
Nov 25 09:38:35 compute-0 sudo[125582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:35 compute-0 python3.9[125584]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:35 compute-0 sudo[125582]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:35 compute-0 sudo[125660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uplcxfflfzpqngxyniyvfzfkzqiuisre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063515.1345768-206-195558840858141/AnsiballZ_file.py'
Nov 25 09:38:35 compute-0 sudo[125660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:35 compute-0 python3.9[125662]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:38:35 compute-0 sudo[125660]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:35 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:36 compute-0 sudo[125814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqcwevzkgsmjveustmwsmjecglaintmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063515.9272594-275-60379565430512/AnsiballZ_file.py'
Nov 25 09:38:36 compute-0 sudo[125814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:36 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0540037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:36 compute-0 python3.9[125816]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:36 compute-0 sudo[125814]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:36 compute-0 ceph-mon[74207]: pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:36 compute-0 sudo[125966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzeacsnpzihwwnmmlkiezgrnfnxfqekd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063516.3941402-299-168220506507399/AnsiballZ_stat.py'
Nov 25 09:38:36 compute-0 sudo[125966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:36.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:36 compute-0 python3.9[125968]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:36 compute-0 sudo[125966]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:36 compute-0 sudo[126044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppvsefixtdlqdkxvitmdovnssiyshvbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063516.3941402-299-168220506507399/AnsiballZ_file.py'
Nov 25 09:38:36 compute-0 sudo[126044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:36.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:36.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:36.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:36.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:37 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c0025e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:37 compute-0 python3.9[126046]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:37 compute-0 sudo[126044]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:37.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:37 compute-0 sudo[126196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xljohdecdcwkjuigumrnebrmeligubvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063517.1952295-335-109343598661108/AnsiballZ_stat.py'
Nov 25 09:38:37 compute-0 sudo[126196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:37 compute-0 python3.9[126198]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:37 compute-0 sudo[126196]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:37 compute-0 sudo[126275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutpkiohenhuignzwjpukocjaeslixlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063517.1952295-335-109343598661108/AnsiballZ_file.py'
Nov 25 09:38:37 compute-0 sudo[126275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:37 compute-0 python3.9[126277]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:37 compute-0 sudo[126275]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:37 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:38 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:38 compute-0 ceph-mon[74207]: pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:38:38 compute-0 sudo[126428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixvpbmosetchmokgnifrbcgtgxjdiuul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063518.0062141-371-22556903931155/AnsiballZ_systemd.py'
Nov 25 09:38:38 compute-0 sudo[126428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:38 compute-0 python3.9[126430]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:38:38 compute-0 systemd[1]: Reloading.
Nov 25 09:38:38 compute-0 systemd-rc-local-generator[126452]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:38:38 compute-0 systemd-sysv-generator[126455]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:38:39 compute-0 sudo[126428]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:39 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0540044b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:38:39 compute-0 sudo[126568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:39 compute-0 sudo[126568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:39 compute-0 sudo[126568]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:39 compute-0 sudo[126643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-norpsfpmxgounuufwgbfivdtymmtmyzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063519.1418984-395-167068881102073/AnsiballZ_stat.py'
Nov 25 09:38:39 compute-0 sudo[126643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:39.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:39 compute-0 python3.9[126645]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:39 compute-0 sudo[126643]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:39 compute-0 sudo[126721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kolduudnzzxyfehgddetggmgftzbhbix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063519.1418984-395-167068881102073/AnsiballZ_file.py'
Nov 25 09:38:39 compute-0 sudo[126721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:39 compute-0 python3.9[126723]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:39 compute-0 sudo[126721]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:39 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c003a50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:40 compute-0 sudo[126875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxwkpozodzveuquailybswjewcnetrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063519.9621909-431-206369841631965/AnsiballZ_stat.py'
Nov 25 09:38:40 compute-0 sudo[126875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:40 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:40] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:38:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:40] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:38:40 compute-0 python3.9[126877]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:40 compute-0 sudo[126875]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:40 compute-0 ceph-mon[74207]: pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:38:40 compute-0 sudo[126953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slfsrkopswydytutvmcfvxmshobblnmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063519.9621909-431-206369841631965/AnsiballZ_file.py'
Nov 25 09:38:40 compute-0 sudo[126953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:40 compute-0 python3.9[126955]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:40 compute-0 sudo[126953]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:40.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:40 compute-0 sudo[127105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llebpqiqiromrumomnguprywxxpsfcfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063520.7694864-467-42794502561223/AnsiballZ_systemd.py'
Nov 25 09:38:40 compute-0 sudo[127105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:41 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:38:41 compute-0 python3.9[127107]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:38:41 compute-0 systemd[1]: Reloading.
Nov 25 09:38:41 compute-0 systemd-rc-local-generator[127131]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:38:41 compute-0 systemd-sysv-generator[127135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:38:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:41.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:41 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 09:38:41 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:38:41 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:38:41 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 09:38:41 compute-0 sudo[127105]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:41 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:42 compute-0 python3.9[127300]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:38:42 compute-0 network[127317]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:38:42 compute-0 network[127318]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:38:42 compute-0 network[127319]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:38:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:42 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c003a50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:42 compute-0 ceph-mon[74207]: pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:38:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:42.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:42 compute-0 sudo[127347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:42 compute-0 sudo[127347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:42 compute-0 sudo[127347]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:42 compute-0 sudo[127376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:38:42 compute-0 sudo[127376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:43 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:43 compute-0 podman[127489]: 2025-11-25 09:38:43.324557973 +0000 UTC m=+0.049017315 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 09:38:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:43.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:43 compute-0 podman[127489]: 2025-11-25 09:38:43.400101874 +0000 UTC m=+0.124561216 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:38:43 compute-0 podman[127598]: 2025-11-25 09:38:43.742018831 +0000 UTC m=+0.034331774 container exec e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:38:43 compute-0 podman[127598]: 2025-11-25 09:38:43.750269147 +0000 UTC m=+0.042582070 container exec_died e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:38:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:43 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:43 compute-0 podman[127671]: 2025-11-25 09:38:43.933356095 +0000 UTC m=+0.033973838 container exec 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:38:43 compute-0 podman[127671]: 2025-11-25 09:38:43.950742978 +0000 UTC m=+0.051360721 container exec_died 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:38:44 compute-0 podman[127729]: 2025-11-25 09:38:44.084306506 +0000 UTC m=+0.034269887 container exec c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:38:44 compute-0 podman[127729]: 2025-11-25 09:38:44.200150535 +0000 UTC m=+0.150113896 container exec_died c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:38:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:44 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:44 compute-0 podman[127792]: 2025-11-25 09:38:44.34627838 +0000 UTC m=+0.040088602 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:38:44 compute-0 podman[127792]: 2025-11-25 09:38:44.354099928 +0000 UTC m=+0.047910130 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:38:44 compute-0 ceph-mon[74207]: pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:44 compute-0 podman[127859]: 2025-11-25 09:38:44.495242558 +0000 UTC m=+0.037534148 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 09:38:44 compute-0 podman[127859]: 2025-11-25 09:38:44.507077299 +0000 UTC m=+0.049368879 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, release=1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 25 09:38:44 compute-0 podman[127925]: 2025-11-25 09:38:44.65899294 +0000 UTC m=+0.039650266 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:38:44 compute-0 podman[127925]: 2025-11-25 09:38:44.68637194 +0000 UTC m=+0.067029257 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:38:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:44 compute-0 podman[127986]: 2025-11-25 09:38:44.801946111 +0000 UTC m=+0.036280876 container exec e02f789246dd6fd2b7da468085fc53bece883451d55b67498fa5acdc6d736601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:38:44 compute-0 podman[127986]: 2025-11-25 09:38:44.808083005 +0000 UTC m=+0.042417770 container exec_died e02f789246dd6fd2b7da468085fc53bece883451d55b67498fa5acdc6d736601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:38:44
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'images', '.mgr', '.rgw.root', 'backups', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control']
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
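[annotation] This burst is one pass of the mgr balancer module: it opens a plan named auto_<timestamp>, runs in upmap mode with a 5% max-misplaced budget, considers the listed pools, and prepares 0 of at most 10 upmap changes because the 337 PGs are already evenly placed. The same state can be read back from the CLI; a sketch, assuming --format json behaves for "balancer status" as it does for most ceph commands:

    import json
    import subprocess

    def balancer_status() -> dict:
        """Return the mgr balancer state (mode, active flag, last result)."""
        out = subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    status = balancer_status()
    print(status["mode"], status["active"], status["optimize_result"])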
Nov 25 09:38:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:38:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:44 compute-0 sudo[127376]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:38:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:38:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:38:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:38:44 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:38:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:38:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:38:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:38:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:38:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:38:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:38:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:38:45 compute-0 sudo[128067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:45 compute-0 sudo[128067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:45 compute-0 sudo[128067]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:45 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c003a50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:45 compute-0 sudo[128092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:38:45 compute-0 sudo[128092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:45.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:38:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:45 compute-0 sudo[128092]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:38:45 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:38:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:38:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:38:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:38:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:38:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:38:45 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:38:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:38:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:38:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:38:45 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
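[annotation] The mon_command sequence above is the cephadm mgr module refreshing per-host deployment state: it stores device and host facts under config-key (mgr/cephadm/host.compute-0*), regenerates a minimal ceph.conf, and fetches the client.admin and client.bootstrap-osd keyrings that it distributes to managed hosts. The minimal conf can be produced manually with the same command the mgr dispatches; a one-call sketch:

    import subprocess

    def minimal_conf() -> str:
        """Render the minimal ceph.conf dispatched as 'config generate-minimal-conf'."""
        return subprocess.run(
            ["ceph", "config", "generate-minimal-conf"],
            check=True, capture_output=True, text=True,
        ).stdout

    print(minimal_conf())  # [global] section with fsid and mon_host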
Nov 25 09:38:45 compute-0 sudo[128146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:45 compute-0 sudo[128146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:45 compute-0 sudo[128146]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:45 compute-0 sudo[128171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:38:45 compute-0 sudo[128171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:45 compute-0 sudo[128349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btjdijzsbyqcnumnhkvjaspaduxilqib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063525.5462184-545-226438765323379/AnsiballZ_stat.py'
Nov 25 09:38:45 compute-0 sudo[128349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:45 compute-0 podman[128356]: 2025-11-25 09:38:45.78473666 +0000 UTC m=+0.027620486 container create 6c35affa9d9b2b50a46c22e07c81d0ccbcb0bfb12ddbd052f33675208bb4fd7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:38:45 compute-0 systemd[1]: Started libpod-conmon-6c35affa9d9b2b50a46c22e07c81d0ccbcb0bfb12ddbd052f33675208bb4fd7d.scope.
Nov 25 09:38:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:45 compute-0 podman[128356]: 2025-11-25 09:38:45.832992742 +0000 UTC m=+0.075876568 container init 6c35affa9d9b2b50a46c22e07c81d0ccbcb0bfb12ddbd052f33675208bb4fd7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:38:45 compute-0 podman[128356]: 2025-11-25 09:38:45.837346074 +0000 UTC m=+0.080229891 container start 6c35affa9d9b2b50a46c22e07c81d0ccbcb0bfb12ddbd052f33675208bb4fd7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:45 compute-0 podman[128356]: 2025-11-25 09:38:45.838516109 +0000 UTC m=+0.081399926 container attach 6c35affa9d9b2b50a46c22e07c81d0ccbcb0bfb12ddbd052f33675208bb4fd7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:38:45 compute-0 xenodochial_blackburn[128369]: 167 167
Nov 25 09:38:45 compute-0 systemd[1]: libpod-6c35affa9d9b2b50a46c22e07c81d0ccbcb0bfb12ddbd052f33675208bb4fd7d.scope: Deactivated successfully.
Nov 25 09:38:45 compute-0 podman[128356]: 2025-11-25 09:38:45.840388838 +0000 UTC m=+0.083272664 container died 6c35affa9d9b2b50a46c22e07c81d0ccbcb0bfb12ddbd052f33675208bb4fd7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ebcc71f83dbf69abe2bf79f6aeeb3e0fedadae5eacd0ab90475bc4f0bb9a06e-merged.mount: Deactivated successfully.
Nov 25 09:38:45 compute-0 podman[128356]: 2025-11-25 09:38:45.857680401 +0000 UTC m=+0.100564217 container remove 6c35affa9d9b2b50a46c22e07c81d0ccbcb0bfb12ddbd052f33675208bb4fd7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:38:45 compute-0 podman[128356]: 2025-11-25 09:38:45.774310584 +0000 UTC m=+0.017194410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:38:45 compute-0 python3.9[128354]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:45 compute-0 systemd[1]: libpod-conmon-6c35affa9d9b2b50a46c22e07c81d0ccbcb0bfb12ddbd052f33675208bb4fd7d.scope: Deactivated successfully.
Nov 25 09:38:45 compute-0 sudo[128349]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:45 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c003a50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:45 compute-0 podman[128395]: 2025-11-25 09:38:45.966786915 +0000 UTC m=+0.029153416 container create e98906822325eb639a646dedf27fc0bf01b0cb962b30082feea31e45fb3e3d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:38:45 compute-0 systemd[1]: Started libpod-conmon-e98906822325eb639a646dedf27fc0bf01b0cb962b30082feea31e45fb3e3d02.scope.
Nov 25 09:38:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f6f7c686f1b2a4ce126ce0cd3536d0e48a16ef308c7604efdd614b17931b90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f6f7c686f1b2a4ce126ce0cd3536d0e48a16ef308c7604efdd614b17931b90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f6f7c686f1b2a4ce126ce0cd3536d0e48a16ef308c7604efdd614b17931b90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f6f7c686f1b2a4ce126ce0cd3536d0e48a16ef308c7604efdd614b17931b90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f6f7c686f1b2a4ce126ce0cd3536d0e48a16ef308c7604efdd614b17931b90/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:46 compute-0 podman[128395]: 2025-11-25 09:38:46.026648124 +0000 UTC m=+0.089014645 container init e98906822325eb639a646dedf27fc0bf01b0cb962b30082feea31e45fb3e3d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_vaughan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:38:46 compute-0 podman[128395]: 2025-11-25 09:38:46.032355337 +0000 UTC m=+0.094721838 container start e98906822325eb639a646dedf27fc0bf01b0cb962b30082feea31e45fb3e3d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:46 compute-0 podman[128395]: 2025-11-25 09:38:46.033558466 +0000 UTC m=+0.095924986 container attach e98906822325eb639a646dedf27fc0bf01b0cb962b30082feea31e45fb3e3d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:46 compute-0 podman[128395]: 2025-11-25 09:38:45.9553801 +0000 UTC m=+0.017746601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:38:46 compute-0 sudo[128486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hevdyecpxgkuqvuoubnjzdrincseftts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063525.5462184-545-226438765323379/AnsiballZ_file.py'
Nov 25 09:38:46 compute-0 sudo[128486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:46 compute-0 python3.9[128488]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:46 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:46 compute-0 sudo[128486]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:46 compute-0 confident_vaughan[128432]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:38:46 compute-0 confident_vaughan[128432]: --> All data devices are unavailable
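[annotation] These two confident_vaughan lines are the tail of the "ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" run launched at 09:38:45 under CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group: batch saw one LVM data device, found it already consumed by an existing OSD (osd.1, per the lvm list output further down), and exited as a no-op. That is the expected idempotent re-run of a drive-group spec, not an error. A sketch of the same pre-check using lvm's JSON reporting; the tag name matches the lv_tags shown below:

    import json
    import subprocess

    def lv_is_consumed(lv: str = "/dev/ceph_vg0/ceph_lv0") -> bool:
        """True if the LV already carries ceph OSD tags (so batch will skip it)."""
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_tags", lv],
            check=True, capture_output=True, text=True,
        ).stdout
        tags = json.loads(out)["report"][0]["lv"][0]["lv_tags"]
        return "ceph.osd_id=" in tags

    print(lv_is_consumed())  # True on this host: the LV belongs to osd.1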
Nov 25 09:38:46 compute-0 systemd[1]: libpod-e98906822325eb639a646dedf27fc0bf01b0cb962b30082feea31e45fb3e3d02.scope: Deactivated successfully.
Nov 25 09:38:46 compute-0 podman[128395]: 2025-11-25 09:38:46.310757702 +0000 UTC m=+0.373124203 container died e98906822325eb639a646dedf27fc0bf01b0cb962b30082feea31e45fb3e3d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 25 09:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-59f6f7c686f1b2a4ce126ce0cd3536d0e48a16ef308c7604efdd614b17931b90-merged.mount: Deactivated successfully.
Nov 25 09:38:46 compute-0 podman[128395]: 2025-11-25 09:38:46.33624609 +0000 UTC m=+0.398612591 container remove e98906822325eb639a646dedf27fc0bf01b0cb962b30082feea31e45fb3e3d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_vaughan, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:38:46 compute-0 systemd[1]: libpod-conmon-e98906822325eb639a646dedf27fc0bf01b0cb962b30082feea31e45fb3e3d02.scope: Deactivated successfully.
Nov 25 09:38:46 compute-0 sudo[128171]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:46 compute-0 ceph-mon[74207]: pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:38:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:38:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:38:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:38:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:38:46 compute-0 sudo[128533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:46 compute-0 sudo[128533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:46 compute-0 sudo[128533]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:46 compute-0 sudo[128581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:38:46 compute-0 sudo[128581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:46 compute-0 sudo[128708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kybbjluiokukcjkkjrtotjiwspgxdefe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063526.419118-584-69144964058930/AnsiballZ_file.py'
Nov 25 09:38:46 compute-0 sudo[128708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:46 compute-0 podman[128741]: 2025-11-25 09:38:46.738674401 +0000 UTC m=+0.028518028 container create b22130720207918a5cbba52620d61799d98291b4d7ca55d38bbc4fa559957028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:38:46 compute-0 python3.9[128710]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:46 compute-0 sudo[128708]: pam_unix(sudo:session): session closed for user root
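[annotation] Interleaved with the Ceph activity, a zuul-driven Ansible run is configuring the node: the stat/file pair above pins /etc/ssh/sshd_config to mode 0600, and this task creates /var/lib/edpm-config/firewall as root:root 0750. The equivalent enforcement in plain Python (must run as root, like the become'd task; values are copied from the module arguments above):

    import os

    def ensure_dir(path: str = "/var/lib/edpm-config/firewall", mode: int = 0o750):
        """Create the directory if missing and pin owner/mode, as the file task does."""
        os.makedirs(path, exist_ok=True)
        os.chmod(path, mode)
        os.chown(path, 0, 0)  # owner=root, group=root

    ensure_dir()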
Nov 25 09:38:46 compute-0 systemd[1]: Started libpod-conmon-b22130720207918a5cbba52620d61799d98291b4d7ca55d38bbc4fa559957028.scope.
Nov 25 09:38:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:46 compute-0 podman[128741]: 2025-11-25 09:38:46.790970615 +0000 UTC m=+0.080814262 container init b22130720207918a5cbba52620d61799d98291b4d7ca55d38bbc4fa559957028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kalam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:38:46 compute-0 podman[128741]: 2025-11-25 09:38:46.795062946 +0000 UTC m=+0.084906573 container start b22130720207918a5cbba52620d61799d98291b4d7ca55d38bbc4fa559957028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:38:46 compute-0 festive_kalam[128754]: 167 167
Nov 25 09:38:46 compute-0 systemd[1]: libpod-b22130720207918a5cbba52620d61799d98291b4d7ca55d38bbc4fa559957028.scope: Deactivated successfully.
Nov 25 09:38:46 compute-0 podman[128741]: 2025-11-25 09:38:46.798843901 +0000 UTC m=+0.088687548 container attach b22130720207918a5cbba52620d61799d98291b4d7ca55d38bbc4fa559957028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:38:46 compute-0 podman[128741]: 2025-11-25 09:38:46.799053336 +0000 UTC m=+0.088896963 container died b22130720207918a5cbba52620d61799d98291b4d7ca55d38bbc4fa559957028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a789da1268fa263c047dddb03cb737f141b0e6b77fe95dd7e5c92c1d9db9564-merged.mount: Deactivated successfully.
Nov 25 09:38:46 compute-0 podman[128741]: 2025-11-25 09:38:46.819972995 +0000 UTC m=+0.109816622 container remove b22130720207918a5cbba52620d61799d98291b4d7ca55d38bbc4fa559957028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kalam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 09:38:46 compute-0 podman[128741]: 2025-11-25 09:38:46.725259653 +0000 UTC m=+0.015103301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:38:46 compute-0 systemd[1]: libpod-conmon-b22130720207918a5cbba52620d61799d98291b4d7ca55d38bbc4fa559957028.scope: Deactivated successfully.
Nov 25 09:38:46 compute-0 podman[128823]: 2025-11-25 09:38:46.927342867 +0000 UTC m=+0.027602203 container create b89f8f607a76ef6c3acce567532e8e877a0b5be1bc342bdf5e47391d45062e9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:46 compute-0 systemd[1]: Started libpod-conmon-b89f8f607a76ef6c3acce567532e8e877a0b5be1bc342bdf5e47391d45062e9d.scope.
Nov 25 09:38:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:46.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:46.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:46.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:46.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
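[annotation] The alertmanager errors are configuration drift, not a Ceph failure: the ceph-dashboard webhook receiver still points at three np000553469*.shiftstack dashboard URLs from another deployment, and the DNS server at 192.168.122.80 cannot resolve them, so every notification retries until it is dropped. A quick reproduction of the lookup failure:

    import socket

    HOSTS = (
        "np0005534694.shiftstack",
        "np0005534695.shiftstack",
        "np0005534696.shiftstack",
    )

    for host in HOSTS:
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as err:  # matches the "no such host" above
            print(host, "-> unresolvable:", err)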
Nov 25 09:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c0cdec6756251dc25c44789498a33f5d3f67b6778f576f5e8936ac1287fda4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c0cdec6756251dc25c44789498a33f5d3f67b6778f576f5e8936ac1287fda4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c0cdec6756251dc25c44789498a33f5d3f67b6778f576f5e8936ac1287fda4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c0cdec6756251dc25c44789498a33f5d3f67b6778f576f5e8936ac1287fda4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:46 compute-0 podman[128823]: 2025-11-25 09:38:46.98133229 +0000 UTC m=+0.081591636 container init b89f8f607a76ef6c3acce567532e8e877a0b5be1bc342bdf5e47391d45062e9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lumiere, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:38:46 compute-0 podman[128823]: 2025-11-25 09:38:46.986613071 +0000 UTC m=+0.086872397 container start b89f8f607a76ef6c3acce567532e8e877a0b5be1bc342bdf5e47391d45062e9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:38:46 compute-0 podman[128823]: 2025-11-25 09:38:46.987722903 +0000 UTC m=+0.087982249 container attach b89f8f607a76ef6c3acce567532e8e877a0b5be1bc342bdf5e47391d45062e9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lumiere, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:38:47 compute-0 podman[128823]: 2025-11-25 09:38:46.916499164 +0000 UTC m=+0.016758510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:38:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:47 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:47 compute-0 sudo[128943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epamdzxsldmemmsoyhcfhshikrcmgdfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063526.8908455-608-125962579365705/AnsiballZ_stat.py'
Nov 25 09:38:47 compute-0 sudo[128943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]: {
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:     "1": [
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:         {
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "devices": [
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "/dev/loop3"
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             ],
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "lv_name": "ceph_lv0",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "lv_size": "21470642176",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "name": "ceph_lv0",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "tags": {
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.cluster_name": "ceph",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.crush_device_class": "",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.encrypted": "0",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.osd_id": "1",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.type": "block",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.vdo": "0",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:                 "ceph.with_tpm": "0"
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             },
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "type": "block",
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:             "vg_name": "ceph_vg0"
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:         }
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]:     ]
Nov 25 09:38:47 compute-0 quirky_lumiere[128868]: }
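[editor's note] The JSON payload above (container quirky_lumiere) is `ceph-volume lvm list --format json` output: a dict keyed by OSD id, each value a list of LV records carrying the ceph.* tags. A minimal sketch of extracting the fields an operator usually wants; the capture filename is hypothetical.

    import json

    # Parse the `ceph-volume lvm list --format json` payload logged above:
    # a dict keyed by OSD id, each value a list of LV records with ceph.* tags.
    with open("ceph-volume-lvm-list.json") as f:  # hypothetical capture of the JSON above
        inventory = json.load(f)

    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")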
Nov 25 09:38:47 compute-0 systemd[1]: libpod-b89f8f607a76ef6c3acce567532e8e877a0b5be1bc342bdf5e47391d45062e9d.scope: Deactivated successfully.
Nov 25 09:38:47 compute-0 podman[128823]: 2025-11-25 09:38:47.220841898 +0000 UTC m=+0.321101234 container died b89f8f607a76ef6c3acce567532e8e877a0b5be1bc342bdf5e47391d45062e9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lumiere, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-24c0cdec6756251dc25c44789498a33f5d3f67b6778f576f5e8936ac1287fda4-merged.mount: Deactivated successfully.
Nov 25 09:38:47 compute-0 python3.9[128945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:47 compute-0 podman[128823]: 2025-11-25 09:38:47.241158962 +0000 UTC m=+0.341418299 container remove b89f8f607a76ef6c3acce567532e8e877a0b5be1bc342bdf5e47391d45062e9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lumiere, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:47 compute-0 sudo[128943]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:47 compute-0 sudo[128581]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:47 compute-0 systemd[1]: libpod-conmon-b89f8f607a76ef6c3acce567532e8e877a0b5be1bc342bdf5e47391d45062e9d.scope: Deactivated successfully.
Nov 25 09:38:47 compute-0 sudo[128963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:38:47 compute-0 sudo[128963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:47 compute-0 sudo[128963]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:47.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:47 compute-0 sudo[129011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:38:47 compute-0 sudo[129011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:47 compute-0 sudo[129086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbzaargjufvszwyjbznachnspjdhvqdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063526.8908455-608-125962579365705/AnsiballZ_file.py'
Nov 25 09:38:47 compute-0 sudo[129086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:47 compute-0 python3.9[129088]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:47 compute-0 sudo[129086]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:47 compute-0 podman[129132]: 2025-11-25 09:38:47.628225215 +0000 UTC m=+0.025084569 container create 836462c851f8c41ba250e5b73df76ba12abbf8bc0ac9c69c02f7de39f69c9c2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:38:47 compute-0 systemd[1]: Started libpod-conmon-836462c851f8c41ba250e5b73df76ba12abbf8bc0ac9c69c02f7de39f69c9c2f.scope.
Nov 25 09:38:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:47 compute-0 podman[129132]: 2025-11-25 09:38:47.679947218 +0000 UTC m=+0.076806582 container init 836462c851f8c41ba250e5b73df76ba12abbf8bc0ac9c69c02f7de39f69c9c2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:38:47 compute-0 podman[129132]: 2025-11-25 09:38:47.684782408 +0000 UTC m=+0.081641761 container start 836462c851f8c41ba250e5b73df76ba12abbf8bc0ac9c69c02f7de39f69c9c2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mahavira, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:38:47 compute-0 podman[129132]: 2025-11-25 09:38:47.686011565 +0000 UTC m=+0.082870918 container attach 836462c851f8c41ba250e5b73df76ba12abbf8bc0ac9c69c02f7de39f69c9c2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:38:47 compute-0 hungry_mahavira[129156]: 167 167
Nov 25 09:38:47 compute-0 systemd[1]: libpod-836462c851f8c41ba250e5b73df76ba12abbf8bc0ac9c69c02f7de39f69c9c2f.scope: Deactivated successfully.
Nov 25 09:38:47 compute-0 podman[129132]: 2025-11-25 09:38:47.68798893 +0000 UTC m=+0.084848295 container died 836462c851f8c41ba250e5b73df76ba12abbf8bc0ac9c69c02f7de39f69c9c2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-53cb838df537230f66a33b26be8081a616b36a025ca115bfad2cfd6743890bd9-merged.mount: Deactivated successfully.
Nov 25 09:38:47 compute-0 podman[129132]: 2025-11-25 09:38:47.704486758 +0000 UTC m=+0.101346112 container remove 836462c851f8c41ba250e5b73df76ba12abbf8bc0ac9c69c02f7de39f69c9c2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mahavira, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:38:47 compute-0 podman[129132]: 2025-11-25 09:38:47.61814491 +0000 UTC m=+0.015004285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:38:47 compute-0 systemd[1]: libpod-conmon-836462c851f8c41ba250e5b73df76ba12abbf8bc0ac9c69c02f7de39f69c9c2f.scope: Deactivated successfully.
Nov 25 09:38:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:47 compute-0 podman[129179]: 2025-11-25 09:38:47.817398582 +0000 UTC m=+0.027940380 container create dd9abc604987572c075bfa7a5a62fbf58aa7d905400b8064bb10f2984f35d2e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:38:47 compute-0 systemd[1]: Started libpod-conmon-dd9abc604987572c075bfa7a5a62fbf58aa7d905400b8064bb10f2984f35d2e1.scope.
Nov 25 09:38:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/813ef89f0845e77695f3d61bb0ecd75fb35c6e83b58edd0b160dd12c58f64fb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/813ef89f0845e77695f3d61bb0ecd75fb35c6e83b58edd0b160dd12c58f64fb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/813ef89f0845e77695f3d61bb0ecd75fb35c6e83b58edd0b160dd12c58f64fb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/813ef89f0845e77695f3d61bb0ecd75fb35c6e83b58edd0b160dd12c58f64fb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
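[editor's note] The kernel's "supports timestamps until 2038 (0x7fffffff)" lines flag XFS filesystems still using 32-bit inode timestamps: 0x7fffffff seconds after the Unix epoch is the classic Y2038 limit. A one-liner check:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds after 1970-01-01 UTC
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00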
Nov 25 09:38:47 compute-0 podman[129179]: 2025-11-25 09:38:47.869229388 +0000 UTC m=+0.079771206 container init dd9abc604987572c075bfa7a5a62fbf58aa7d905400b8064bb10f2984f35d2e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:38:47 compute-0 podman[129179]: 2025-11-25 09:38:47.873701666 +0000 UTC m=+0.084243464 container start dd9abc604987572c075bfa7a5a62fbf58aa7d905400b8064bb10f2984f35d2e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goldstine, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:38:47 compute-0 podman[129179]: 2025-11-25 09:38:47.874906707 +0000 UTC m=+0.085448505 container attach dd9abc604987572c075bfa7a5a62fbf58aa7d905400b8064bb10f2984f35d2e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goldstine, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:38:47 compute-0 podman[129179]: 2025-11-25 09:38:47.806634969 +0000 UTC m=+0.017176788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:38:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:47 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c003a50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:48 compute-0 sudo[129382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrvqpydkotcjavmwufgdjgbdakxxywjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063527.9096673-653-42770026849737/AnsiballZ_timezone.py'
Nov 25 09:38:48 compute-0 sudo[129382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:48 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c003a50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:48 compute-0 frosty_goldstine[129195]: {}
Nov 25 09:38:48 compute-0 lvm[129401]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:38:48 compute-0 lvm[129401]: VG ceph_vg0 finished
Nov 25 09:38:48 compute-0 systemd[1]: libpod-dd9abc604987572c075bfa7a5a62fbf58aa7d905400b8064bb10f2984f35d2e1.scope: Deactivated successfully.
Nov 25 09:38:48 compute-0 podman[129179]: 2025-11-25 09:38:48.352025689 +0000 UTC m=+0.562567488 container died dd9abc604987572c075bfa7a5a62fbf58aa7d905400b8064bb10f2984f35d2e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goldstine, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-813ef89f0845e77695f3d61bb0ecd75fb35c6e83b58edd0b160dd12c58f64fb2-merged.mount: Deactivated successfully.
Nov 25 09:38:48 compute-0 podman[129179]: 2025-11-25 09:38:48.3801949 +0000 UTC m=+0.590736698 container remove dd9abc604987572c075bfa7a5a62fbf58aa7d905400b8064bb10f2984f35d2e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goldstine, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:38:48 compute-0 systemd[1]: libpod-conmon-dd9abc604987572c075bfa7a5a62fbf58aa7d905400b8064bb10f2984f35d2e1.scope: Deactivated successfully.
Nov 25 09:38:48 compute-0 ceph-mon[74207]: pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:48 compute-0 python3.9[129389]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 09:38:48 compute-0 sudo[129011]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:38:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:38:48 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:48 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 09:38:48 compute-0 sudo[129415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:38:48 compute-0 sudo[129415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:48 compute-0 sudo[129415]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:48 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 09:38:48 compute-0 sudo[129382]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:48 compute-0 sudo[129591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayzjucklsmpxlfpgvqzoxufoknwpjhif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063528.740648-680-249313061155767/AnsiballZ_file.py'
Nov 25 09:38:48 compute-0 sudo[129591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:49 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:49 compute-0 python3.9[129593]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:49 compute-0 sudo[129591]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:49.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:49 compute-0 sudo[129743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hycwaigtfhygabxzmqmfrdedelegymkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063529.2286208-704-66897394561949/AnsiballZ_stat.py'
Nov 25 09:38:49 compute-0 sudo[129743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:38:49 compute-0 python3.9[129745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:49 compute-0 sudo[129743]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:49 compute-0 sudo[129822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfwohtbxtwbnnffalwqvkuivfzjvgwaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063529.2286208-704-66897394561949/AnsiballZ_file.py'
Nov 25 09:38:49 compute-0 sudo[129822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:49 compute-0 python3.9[129824]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:49 compute-0 sudo[129822]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:49 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:50 compute-0 sudo[129975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwnbgoanjdwbmyrlmhtwhbpfoduwhvzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063530.048103-740-193639441091766/AnsiballZ_stat.py'
Nov 25 09:38:50 compute-0 sudo[129975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:50] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 25 09:38:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:38:50] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Nov 25 09:38:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:50 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:50 compute-0 python3.9[129977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:50 compute-0 sudo[129975]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:50 compute-0 ceph-mon[74207]: pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:50 compute-0 sudo[130053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyqvmishxrobilneepmaxpvfaoqnqibm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063530.048103-740-193639441091766/AnsiballZ_file.py'
Nov 25 09:38:50 compute-0 sudo[130053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:50 compute-0 python3.9[130055]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4zdyj_qk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:50 compute-0 sudo[130053]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:50.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:51 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c003a50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:51 compute-0 sudo[130205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jroaroitfqfqbcunmcvggoxzwsojcdnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063530.9006772-776-265654681817634/AnsiballZ_stat.py'
Nov 25 09:38:51 compute-0 sudo[130205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:38:51 compute-0 python3.9[130207]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:51 compute-0 sudo[130205]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.002000018s ======
Nov 25 09:38:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:51.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000018s
Nov 25 09:38:51 compute-0 sudo[130283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juvuxpwtcrloskyodxdglhacdybfracj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063530.9006772-776-265654681817634/AnsiballZ_file.py'
Nov 25 09:38:51 compute-0 sudo[130283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:51 compute-0 python3.9[130285]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:51 compute-0 sudo[130283]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:51 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:52 compute-0 sudo[130437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gysjijbrlmenuvwsernkjtwsvehmaqhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063531.8826845-815-225028993412679/AnsiballZ_command.py'
Nov 25 09:38:52 compute-0 sudo[130437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:52 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:52 compute-0 python3.9[130439]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:38:52 compute-0 sudo[130437]: pam_unix(sudo:session): session closed for user root
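[editor's note] The task above runs `nft -j list ruleset`, which dumps the live ruleset as JSON: a top-level "nftables" array mixing metainfo, table, chain, and rule objects. A minimal sketch of summarizing that output (needs root and the nft binary, as in the logged task):

    import json
    import subprocess

    # Dump the live ruleset as JSON, same command as the logged ansible task.
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    objects = json.loads(out)["nftables"]

    # Count objects per type (metainfo, table, chain, rule, ...).
    counts = {}
    for obj in objects:
        for kind in obj:
            counts[kind] = counts.get(kind, 0) + 1
    print(counts)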
Nov 25 09:38:52 compute-0 ceph-mon[74207]: pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:38:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:52.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:52 compute-0 sudo[130590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsplchjtfxtiftxotdemxzdbpkzgsndk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063532.5591547-839-230091328060487/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 09:38:52 compute-0 sudo[130590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:53 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:53 compute-0 python3[130592]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 09:38:53 compute-0 sudo[130590]: pam_unix(sudo:session): session closed for user root
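[editor's note] The edpm_nftables_from_files module is invoked with only src=/var/lib/edpm-config/firewall; its internals are not shown in the log. A hypothetical sketch of the shape of such a step, assuming it merges every YAML rule file under src into one rule list:

    import glob
    import os
    import yaml  # PyYAML, as used by ansible itself

    # Hypothetical: gather the YAML fragments written earlier in this run
    # (sshd-networks.yaml, edpm-nftables-base.yaml, ...) into one rule list.
    src = "/var/lib/edpm-config/firewall"
    rules = []
    for path in sorted(glob.glob(os.path.join(src, "*.yaml"))):
        with open(path) as f:
            doc = yaml.safe_load(f)
        rules.extend(doc if isinstance(doc, list) else [doc])
    print(f"{len(rules)} rule entries loaded from {src}")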
Nov 25 09:38:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:53.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:53 compute-0 sudo[130742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcrvqivdyuhspillhxgxpcgiujzfcgjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063533.2017527-863-107827446808796/AnsiballZ_stat.py'
Nov 25 09:38:53 compute-0 sudo[130742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:53 compute-0 python3.9[130744]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:53 compute-0 sudo[130742]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:53 compute-0 sudo[130821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylqgokklqpgbvofizkskfwfsspypekem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063533.2017527-863-107827446808796/AnsiballZ_file.py'
Nov 25 09:38:53 compute-0 sudo[130821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:53 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:53 compute-0 python3.9[130823]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:53 compute-0 sudo[130821]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:54 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:54 compute-0 sudo[130974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxolviebiyhtvbqzhpzuyyursucabluy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063534.0917516-899-194346332189406/AnsiballZ_stat.py'
Nov 25 09:38:54 compute-0 sudo[130974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:54 compute-0 ceph-mon[74207]: pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:54 compute-0 python3.9[130976]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:54 compute-0 sudo[130974]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:54 compute-0 sudo[131052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzoenuttuyjosduogmutqtyvcprswiqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063534.0917516-899-194346332189406/AnsiballZ_file.py'
Nov 25 09:38:54 compute-0 sudo[131052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:54.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:54 compute-0 python3.9[131054]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:54 compute-0 sudo[131052]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:55 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:55 compute-0 sudo[131204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhoxlsdzfrmgckwdaklbqquudwghkkwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063534.9614763-935-52687041959870/AnsiballZ_stat.py'
Nov 25 09:38:55 compute-0 sudo[131204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:38:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
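[editor's note] The pg_autoscaler lines above follow one formula: pg target = capacity ratio x bias x 300, then quantized. The 300 multiplier is an inference, not stated in the log; it matches 3 OSDs x the default mon_target_pg_per_osd of 100 (the pgmap shows 60 GiB total and each OSD LV is ~20 GiB). A quick check against three of the logged values:

    import math

    # Reproduce the pg_autoscaler targets logged above.
    # Assumption: the 300 budget is 3 OSDs * mon_target_pg_per_osd (default 100).
    PG_BUDGET = 300
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
    ]
    for name, ratio, bias, logged in pools:
        target = ratio * bias * PG_BUDGET
        assert math.isclose(target, logged), name
        print(f"{name}: pg target {target:.16g} matches the log")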
Nov 25 09:38:55 compute-0 python3.9[131206]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:55 compute-0 sudo[131204]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:55.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:55 compute-0 sudo[131282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xebhgfllcweqmxlvdsuoudsikiezlmcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063534.9614763-935-52687041959870/AnsiballZ_file.py'
Nov 25 09:38:55 compute-0 sudo[131282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:55 compute-0 python3.9[131284]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:55 compute-0 sudo[131282]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:55 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:56 compute-0 sudo[131436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjqcvrywtplkyikqvphnrntpwdihqfbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063535.8351638-971-274998396033431/AnsiballZ_stat.py'
Nov 25 09:38:56 compute-0 sudo[131436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:56 compute-0 python3.9[131438]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:56 compute-0 sudo[131436]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:56 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:56 compute-0 sudo[131514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wypjghhnkaxhgccaxnswzgsgiggemxbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063535.8351638-971-274998396033431/AnsiballZ_file.py'
Nov 25 09:38:56 compute-0 sudo[131514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:56 compute-0 ceph-mon[74207]: pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:56 compute-0 python3.9[131516]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:56 compute-0 sudo[131514]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:56.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:56 compute-0 sudo[131666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxrcjarlnfiidixiaxujbnzudiiwqxtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063536.6978323-1007-270867604082800/AnsiballZ_stat.py'
Nov 25 09:38:56 compute-0 sudo[131666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:56.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:56.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:56.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:38:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:38:56.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
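[editor's note] The alertmanager retries above fail on DNS, not HTTP: the ceph-dashboard webhook targets np0005534694-6.shiftstack do not resolve via the resolver at 192.168.122.80. A minimal reproduction of the lookup, independent of alertmanager (uses the host's configured resolver):

    import socket

    # The webhook hosts from the alertmanager errors above.
    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            info = socket.getaddrinfo(host, 8443)
            print(f"{host}: resolves to {info[0][4][0]}")
        except socket.gaierror as err:
            print(f"{host}: lookup failed ({err})")  # matches "no such host" in the log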
Nov 25 09:38:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:57 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:57 compute-0 python3.9[131668]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:38:57 compute-0 sudo[131666]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:57 compute-0 sudo[131744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwokgrnadvaclnhjcozwxsunindbpvgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063536.6978323-1007-270867604082800/AnsiballZ_file.py'
Nov 25 09:38:57 compute-0 sudo[131744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:38:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:57.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:38:57 compute-0 python3.9[131746]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:57 compute-0 sudo[131744]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:38:57 compute-0 sudo[131897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqgyfhncumaxozcvkpsxttecmyyegbys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063537.672071-1046-233085058117954/AnsiballZ_command.py'
Nov 25 09:38:57 compute-0 sudo[131897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:57 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:58 compute-0 python3.9[131899]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:38:58 compute-0 sudo[131897]: pam_unix(sudo:session): session closed for user root
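[editor's note] The command above is the validation step: it concatenates the generated chain/flush/rule/jump fragments in load order and feeds them to `nft -c -f -`, where -c parses and checks the ruleset without applying it. A sketch of the same dry-run check:

    import pathlib
    import subprocess

    # Same dry-run as the logged command: concatenate the EDPM nft fragments
    # in load order and have `nft -c -f -` parse them without applying anything.
    files = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    ruleset = "".join(pathlib.Path(f).read_text() for f in files)
    # Raises CalledProcessError if any fragment is syntactically invalid.
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True, check=True)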
Nov 25 09:38:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:58 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:58 compute-0 ceph-mon[74207]: pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:58 compute-0 sudo[132053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bowalnwysedjqeelsmhyhzkurzjtxgnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063538.1736686-1070-260815976199498/AnsiballZ_blockinfile.py'
Nov 25 09:38:58 compute-0 sudo[132053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:58 compute-0 python3.9[132055]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:58 compute-0 sudo[132053]: pam_unix(sudo:session): session closed for user root
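
Note: the blockinfile task above keeps a marker-delimited block of include statements in /etc/sysconfig/nftables.conf and passes validate="nft -c -f %s", so a candidate copy is syntax-checked before the real file is touched. A rough Python equivalent, assuming root and an existing conf file (the update_conf name is illustrative, not part of the role):

    # Maintain the "# BEGIN/END ANSIBLE MANAGED BLOCK" include block and
    # validate a candidate copy with nft before installing it.
    import os, re, subprocess, tempfile, pathlib

    BLOCK = "\n".join([
        "# BEGIN ANSIBLE MANAGED BLOCK",
        'include "/etc/nftables/iptables.nft"',
        'include "/etc/nftables/edpm-chains.nft"',
        'include "/etc/nftables/edpm-rules.nft"',
        'include "/etc/nftables/edpm-jumps.nft"',
        "# END ANSIBLE MANAGED BLOCK",
    ]) + "\n"

    def update_conf(conf="/etc/sysconfig/nftables.conf"):
        text = pathlib.Path(conf).read_text()
        marked = re.compile(r"# BEGIN ANSIBLE MANAGED BLOCK.*?"
                            r"# END ANSIBLE MANAGED BLOCK\n", re.S)
        new = marked.sub(BLOCK, text) if marked.search(text) else text + BLOCK
        with tempfile.NamedTemporaryFile("w", suffix=".conf",
                                         delete=False) as tmp:
            tmp.write(new)
            candidate = tmp.name
        # validate= runs the candidate file through nft before install
        subprocess.run(["nft", "-c", "-f", candidate], check=True)
        os.unlink(candidate)
        pathlib.Path(conf).write_text(new)
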
Nov 25 09:38:58 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:38:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:58.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:59 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:59 compute-0 sudo[132209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwpshggxlqdbnjwtlhjtmpxcjxqdjayu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063538.9129002-1097-187660855292392/AnsiballZ_file.py'
Nov 25 09:38:59 compute-0 sudo[132209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:38:59 compute-0 python3.9[132211]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:59 compute-0 sudo[132209]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:59 compute-0 sudo[132212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:38:59 compute-0 sudo[132212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:38:59 compute-0 sudo[132212]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:38:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:38:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:59.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:38:59 compute-0 sudo[132386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgolyqrfdvpuozggpvubcwisegbdcyrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063539.3598158-1097-235112832346562/AnsiballZ_file.py'
Nov 25 09:38:59 compute-0 sudo[132386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:38:59 compute-0 python3.9[132388]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:38:59 compute-0 sudo[132386]: pam_unix(sudo:session): session closed for user root
Nov 25 09:38:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:38:59 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:38:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:38:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:00 compute-0 sudo[132540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdjkcahexqonvgsuxxnwyvguucsydaqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063539.863104-1142-84100972093586/AnsiballZ_mount.py'
Nov 25 09:39:00 compute-0 sudo[132540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:00] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:39:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:00] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:39:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:00 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:00 compute-0 python3.9[132542]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 09:39:00 compute-0 sudo[132540]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:00 compute-0 ceph-mon[74207]: pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:00 compute-0 sudo[132692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjrundeoqpaxarmnudawozoggdaszthw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063540.4675272-1142-175626866065486/AnsiballZ_mount.py'
Nov 25 09:39:00 compute-0 sudo[132692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:00.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:00 compute-0 python3.9[132694]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 09:39:00 compute-0 sudo[132692]: pam_unix(sudo:session): session closed for user root
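
Note: the two ansible.posix.mount tasks above mount hugetlbfs pools for 1 GiB and 2 MiB pages; state=mounted both mounts the filesystem and persists it to /etc/fstab. A minimal sketch of the same effect, assuming root (helper name illustrative; the earlier file tasks already created the directories with owner zuul and group hugetlbfs):

    # Mount the two hugetlbfs page-size pools and persist them in fstab,
    # approximating ansible.posix.mount with state=mounted.
    import os, subprocess

    MOUNTS = [("/dev/hugepages1G", "pagesize=1G"),
              ("/dev/hugepages2M", "pagesize=2M")]

    def mount_hugepages():
        for path, opts in MOUNTS:
            os.makedirs(path, exist_ok=True)
            subprocess.run(["mount", "-t", "hugetlbfs", "-o", opts,
                            "none", path], check=True)
            line = f"none {path} hugetlbfs {opts} 0 0\n"
            if line not in open("/etc/fstab").read():
                with open("/etc/fstab", "a") as fstab:
                    fstab.write(line)  # persist across reboots (boot=True)
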
Nov 25 09:39:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:01 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc074003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:39:01 compute-0 sshd-session[124257]: Connection closed by 192.168.122.30 port 40008
Nov 25 09:39:01 compute-0 sshd-session[124253]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:39:01 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 25 09:39:01 compute-0 systemd-logind[744]: Session 44 logged out. Waiting for processes to exit.
Nov 25 09:39:01 compute-0 systemd[1]: session-44.scope: Consumed 20.308s CPU time.
Nov 25 09:39:01 compute-0 systemd-logind[744]: Removed session 44.
Nov 25 09:39:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:01.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:01 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:02 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:02 compute-0 ceph-mon[74207]: pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:39:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:02.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:03 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:03.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:03 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc074004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:04 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:04 compute-0 ceph-mon[74207]: pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:04.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:05 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:05 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc078002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:06 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:06 compute-0 sshd-session[132725]: Accepted publickey for zuul from 192.168.122.30 port 33280 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:39:06 compute-0 systemd-logind[744]: New session 45 of user zuul.
Nov 25 09:39:06 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 25 09:39:06 compute-0 sshd-session[132725]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:39:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093906 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:39:06 compute-0 ceph-mon[74207]: pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:06.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:06 compute-0 sudo[132878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gylxhntaxnkmllaqqsvfsvnynpnzcnfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063546.478127-18-212448374290148/AnsiballZ_tempfile.py'
Nov 25 09:39:06 compute-0 sudo[132878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:06 compute-0 python3.9[132880]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 09:39:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:06.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:06.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:06.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:06.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:06 compute-0 sudo[132878]: pam_unix(sudo:session): session closed for user root
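
Note: the Alertmanager errors above are all the same underlying failure: the ceph-dashboard webhook targets np0005534694/5/6.shiftstack cannot be resolved through the resolver at 192.168.122.80. A quick way to reproduce the lookup failure from the node (this uses the system resolver, so it only matches the log if 192.168.122.80 is the configured nameserver; hostname copied from the log):

    # Reproduce the "no such host" lookup failure reported by alertmanager.
    import socket

    try:
        socket.getaddrinfo("np0005534694.shiftstack", 8443)
    except socket.gaierror as exc:
        print("lookup failed:", exc)  # e.g. "Name or service not known"
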
Nov 25 09:39:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:07 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:07.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:07 compute-0 sudo[133030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtacjpytmpmautaczshjnpbbcwmwdlji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063547.1188467-54-219407745091742/AnsiballZ_stat.py'
Nov 25 09:39:07 compute-0 sudo[133030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:07 compute-0 python3.9[133032]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:39:07 compute-0 sudo[133030]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:07 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:08 compute-0 sudo[133186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrpnbaenekkyrsczlqvxgpxkwgnvvvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063547.7497241-78-217364873718569/AnsiballZ_slurp.py'
Nov 25 09:39:08 compute-0 sudo[133186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:08 compute-0 python3.9[133188]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 25 09:39:08 compute-0 sudo[133186]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:08 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc078003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:08 compute-0 ceph-mon[74207]: pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:08 compute-0 sudo[133338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iclvmvplonzpfyevygphuxhxdulxkmip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063548.398201-102-230126126842787/AnsiballZ_stat.py'
Nov 25 09:39:08 compute-0 sudo[133338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:08 compute-0 python3.9[133340]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.1kgtjmt3 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:08 compute-0 sudo[133338]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:08.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:09 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:09 compute-0 sudo[133463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgbhgwqwtcwfdsxsgzkopmciimbeiteq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063548.398201-102-230126126842787/AnsiballZ_copy.py'
Nov 25 09:39:09 compute-0 sudo[133463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:09 compute-0 python3.9[133465]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.1kgtjmt3 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063548.398201-102-230126126842787/.source.1kgtjmt3 _original_basename=.y9swqd0n follow=False checksum=719236507bdcc56ceb2be3ce1ef5008b5cfc2235 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:09 compute-0 sudo[133463]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:09.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:09 compute-0 sudo[133616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iolqztkajmpsqhrldybpehqdydsbsvfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063549.4453306-147-131442615974744/AnsiballZ_setup.py'
Nov 25 09:39:09 compute-0 sudo[133616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:09 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:10 compute-0 python3.9[133618]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:39:10 compute-0 sudo[133616]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:10] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:39:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:10] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:39:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:10 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:10 compute-0 ceph-mon[74207]: pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:10 compute-0 sudo[133769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esxeuptjebypudomdgcixjfzskiabeje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063550.3005874-172-267135634421091/AnsiballZ_blockinfile.py'
Nov 25 09:39:10 compute-0 sudo[133769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:10 compute-0 python3.9[133771]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/QqShzRf5Fxs30q3tSf7IhrByfRVQwrs4CVW/gcd2Sdcp7tmVXVNFpJc8XlgTmWxcSLbFtAv0HgJOJ3p6/+g394nChAIaM55uhK/RLFqBZ/byiFqEjvN2LkEWuUVdvbZM808GhONJnWQtg70nn99jeLP34zkSD7gsU7cykxF7K7VyeBfeSiuOcyTjXvVfXr9TZxCZMrsb4eWFZAZ4QERXITlLcZthwc0kd17QWJWLo8Ssv4Qu0DtCHtqHO07s7Nz/CpSs0TX5jVM+C+2rAMn+aAZ4J25X8di4ABF5tO27d+ePazRlU5PWjb8n6kdy1B/cjHgvajXOoUPb5RjyVx2IgULBXaWsIRO23wp8YqiE1OdTly2+Nr5KiTPvR5yqq9C6aBNzS7YyUQc6Rf2RBAaLQbA36NJLGvPUWC7iYVtWdGoTfcTmzqkD2s3hzZl+zU2xNS0IpwByJsOJVIijtGFh1Y45uujq0WUJNPf1ayrY2Z/TV+iO/1iah3JArjyNiq8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPD1sScOy6Aiq5PZkl3KepHqJnvlMIZW4R0DzMl4b3w
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO/iVb5vehoW1eqrk4jdR3j25kacpoWkaPIq4PHAndTN4lXAEwSRab7iUqXkAAaYvUnrCJ86WUoAYGkII0QB5wA=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSE1VMIuB9MiQ17/QHDRAbfwrBNbTb+wZH1rCqeQvAxcHqZYp6TugJnyWX+nah5oDk8vz2PCIUW2lm/tVgP4Y2JHeaN2uMNgVnz1WtD6lCQORMYi1R+KpBgiAQoZAjAyC5Ugx5LWbDvrwtpt0zi2DEgCr2Zao5DG5UAaIcs7/Rj2LRx3hgA4jJ9xJKHVi5bUZfjIlWxLzVXVYT+dvUNrZoiVMBcaUMZRpU4tJ/76mE2jbqsfHEPFwHZ6ljoIegFbzNYoKYMCPK+DeOs/73xD4r/nzeQOK3IQzMOEEVaUYvceA+EPX4M+MrKfkNrJwf35qTOFJpb368gJsebA9uXjzPfzX/uh1atxLv5SihEzC5fHdiZ3BZ3wLEy0C7lvXyRBZdQx+anEYQnDepM/ThOT4YR2BNSCdRS2OpzeSJDS+o5CS++zCqWM4yI3lufZm8O8JqPEblV518196TSyMlAOzPbjEjrUaYGdljY5S2OzKA4PBJW4hW4RyBtjcZWJBpNlM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBoG9NSSqw98oHfgpW8u+wJYHDhMiOjIhpCElLIROYdO
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFL1noqwoCl3YzxWiRl0GcsDxYERT1o8e2TvLqUkxWuv8xj0oHuq7+GhcKu7HpiCls71ko7MDcOX4zteG544k4=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBYH+LEkGk38QCoX+uCPb3zHk7+XCeEWV22HpalqUrYF70U5Myra5/E2/v2kioqGNh5TR9q+A7kNO0JU78Ai+6UBv5aJlbEptu33E5t38qiAv3rpyypYwQ8PdWBl7OCeDcqz0EyYAZEw7rLbCWimqRhYsSXuUND+rRboiuI8DEX229oAgnRmIjyPJTTdKGiM3FTdl9YiSbYNyBykzJ8AugCfme4+hmds+8LJloh2aJjRJCs3/GvxdaGJcjBWAqN3Aurg+gPekKe4fwmOir2+KpqBDQE9YMfiBvraaCMGrDXkAjPdsycsvGMsWckhOgEW5qpTIt+ca5kcrK43ChAH5R/PpHlHnEYqw2o26BLmqIejfmXKRSxmH/Fq9Ldj3DMLJr4NTFBfJAl8wqsUKs6/0jngwOCYz6NLs7GgGZLMYv6wbRVgUpCc4ikQ8f1EDmXTdtqxef+QdmLTgWY1qCqe5lL8BcDDCjOTLJ6bbLUAdubY1z4vb6SFVcamH4SkSCFxs=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHCQQOw3EbtZ2XAFA2gGrEnb7MaEAFwIJjyskket7pD
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFP8ctNKDLqIcODtgMol02WD/NgFM5ja/WeN20e07JH/Mz/Ge/v2/ybsY8LOtiyzixlX47XT8hWBR4IBwS2uvfM=
                                              create=True mode=0644 path=/tmp/ansible.1kgtjmt3 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:10 compute-0 sudo[133769]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:10.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:11 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc078003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:39:11 compute-0 sudo[133921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwhtgtmzdzgigttsmnjngzovytfovdek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063550.9488022-196-2591694708281/AnsiballZ_command.py'
Nov 25 09:39:11 compute-0 sudo[133921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:11.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:11 compute-0 python3.9[133923]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1kgtjmt3' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:39:11 compute-0 sudo[133921]: pam_unix(sudo:session): session closed for user root
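
Note: the sequence since 09:39:07 (stat, slurp, copy, blockinfile, then the shell step above) assembles the cluster-wide /etc/ssh/ssh_known_hosts in a temporary file and only installs it once complete. The final step in the log overwrites the target in place with a shell redirect; below is a sketch of the same install done with an atomic rename instead, which avoids a half-written file if the writer is interrupted (staged path from the log; helper name illustrative):

    # Install the staged known_hosts file. os.replace() swaps the file
    # atomically on the same filesystem, unlike "cat tmp > target".
    import os, shutil

    def install_known_hosts(staged="/tmp/ansible.1kgtjmt3",
                            target="/etc/ssh/ssh_known_hosts"):
        tmp = target + ".tmp"
        shutil.copyfile(staged, tmp)  # stage next to the target
        os.chmod(tmp, 0o644)          # match mode=0644 from the copy task
        os.replace(tmp, target)       # atomic swap
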
Nov 25 09:39:11 compute-0 sudo[134076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efwrgvjuonxzkcqeulimhmtdxocvfoak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063551.5740058-220-85746491708512/AnsiballZ_file.py'
Nov 25 09:39:11 compute-0 sudo[134076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:11 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:12 compute-0 python3.9[134078]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.1kgtjmt3 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:12 compute-0 sudo[134076]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:12 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:12 compute-0 sshd-session[132728]: Connection closed by 192.168.122.30 port 33280
Nov 25 09:39:12 compute-0 sshd-session[132725]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:39:12 compute-0 systemd-logind[744]: Session 45 logged out. Waiting for processes to exit.
Nov 25 09:39:12 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 25 09:39:12 compute-0 systemd[1]: session-45.scope: Consumed 3.230s CPU time.
Nov 25 09:39:12 compute-0 systemd-logind[744]: Removed session 45.
Nov 25 09:39:12 compute-0 ceph-mon[74207]: pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:39:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:12.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:13 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:39:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:13.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:13 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc078003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:39:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:14 compute-0 ceph-mon[74207]: pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:39:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:14.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:14 compute-0 systemd[92827]: Created slice User Background Tasks Slice.
Nov 25 09:39:14 compute-0 systemd[92827]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 09:39:14 compute-0 systemd[92827]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 09:39:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:39:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:15 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:39:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:15.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:15 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:16 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:16 compute-0 ceph-mon[74207]: pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:39:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:16.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:16.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:16.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:16.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:16.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:17 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:39:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:17 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:39:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:17 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:39:17 compute-0 sshd-session[134109]: Accepted publickey for zuul from 192.168.122.30 port 49744 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:39:17 compute-0 systemd-logind[744]: New session 46 of user zuul.
Nov 25 09:39:17 compute-0 systemd[1]: Started Session 46 of User zuul.
Nov 25 09:39:17 compute-0 sshd-session[134109]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:39:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:17.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:17 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:18 compute-0 python3.9[134264]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:39:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:18 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0780045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:18 compute-0 ceph-mon[74207]: pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:39:18 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 09:39:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:18.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:18 compute-0 sudo[134420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmbdipyjryuwlgueurejmgpssaodipid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063558.4763439-56-249806211192805/AnsiballZ_systemd.py'
Nov 25 09:39:18 compute-0 sudo[134420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:19 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:19 compute-0 python3.9[134422]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 09:39:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:39:19 compute-0 sudo[134420]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:19 compute-0 sudo[134472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:19 compute-0 sudo[134472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:19.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:19 compute-0 sudo[134472]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:19 compute-0 sudo[134599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrqtvfhkkbmjhrhkcndiwevcdzlxkrlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063559.333524-80-100480372717340/AnsiballZ_systemd.py'
Nov 25 09:39:19 compute-0 sudo[134599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:19 compute-0 python3.9[134601]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:39:19 compute-0 sudo[134599]: pam_unix(sudo:session): session closed for user root
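
Note: the two ansible.builtin.systemd tasks above are split deliberately: one sets enabled=True (boot-time behaviour), the other state=started (current state); neither implies the other. The direct equivalent, assuming root (function name illustrative):

    # Enable sshd at boot and make sure it is running now; "start" is a
    # no-op when the unit is already active.
    import subprocess

    def ensure_sshd():
        subprocess.run(["systemctl", "enable", "sshd"], check=True)
        subprocess.run(["systemctl", "start", "sshd"], check=True)
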
Nov 25 09:39:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:19 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0780045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:39:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:20] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:39:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:20] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:39:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:20 compute-0 sudo[134754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzhczdlltbieeugdrlwuqddktylybpvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063560.050115-107-171512391320171/AnsiballZ_command.py'
Nov 25 09:39:20 compute-0 sudo[134754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:20 compute-0 python3.9[134756]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:39:20 compute-0 ceph-mon[74207]: pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:39:20 compute-0 sudo[134754]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:20.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:21 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:21 compute-0 sudo[134907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indvasxttdyrjancficzvqudgewslosl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063560.7314818-131-88788786826185/AnsiballZ_stat.py'
Nov 25 09:39:21 compute-0 sudo[134907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:21 compute-0 python3.9[134909]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:39:21 compute-0 sudo[134907]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:21 compute-0 sudo[135060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xizzinfyqkouspjrfifuprtdbasmxpnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063561.5670254-158-46807521734527/AnsiballZ_file.py'
Nov 25 09:39:21 compute-0 sudo[135060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:21 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:22 compute-0 python3.9[135062]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:22 compute-0 sudo[135060]: pam_unix(sudo:session): session closed for user root
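
Note: the tail of the firewall play looks like a change-sentinel pattern: /etc/nftables/edpm-rules.nft.changed appears to be a marker dropped by an earlier step when the rules file was modified, the chains file is applied with "nft -f", and the stat/absent pair above checks for and clears the sentinel. A condensed sketch of that pattern (an interpretation of the tasks, not the exact task order; paths from the log):

    # Change-sentinel pattern: reload only when the marker left by an
    # earlier "file changed" step exists, then clear the marker.
    import os, subprocess

    SENTINEL = "/etc/nftables/edpm-rules.nft.changed"

    def reload_if_changed():
        if os.path.exists(SENTINEL):
            subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"],
                           check=True)
            os.remove(SENTINEL)
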
Nov 25 09:39:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:22 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0780045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:22 compute-0 sshd-session[134112]: Connection closed by 192.168.122.30 port 49744
Nov 25 09:39:22 compute-0 sshd-session[134109]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:39:22 compute-0 systemd-logind[744]: Session 46 logged out. Waiting for processes to exit.
Nov 25 09:39:22 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 25 09:39:22 compute-0 systemd[1]: session-46.scope: Consumed 2.725s CPU time.
Nov 25 09:39:22 compute-0 systemd-logind[744]: Removed session 46.
Nov 25 09:39:22 compute-0 ceph-mon[74207]: pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:22.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:23 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:23.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:23 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:24 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:24 compute-0 ceph-mon[74207]: pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:24.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:25 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0780056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:39:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:25.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:39:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:25 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:26 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc05c004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093926 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:39:26 compute-0 ceph-mon[74207]: pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:26.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:26.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:26.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:26.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:26.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:27 compute-0 sshd-session[135092]: Accepted publickey for zuul from 192.168.122.30 port 60812 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:39:27 compute-0 systemd-logind[744]: New session 47 of user zuul.
Nov 25 09:39:27 compute-0 systemd[1]: Started Session 47 of User zuul.
Nov 25 09:39:27 compute-0 sshd-session[135092]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:39:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:39:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:27.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:39:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:28 compute-0 python3.9[135247]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:39:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:28 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:28 compute-0 ceph-mon[74207]: pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:39:28 compute-0 sudo[135401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-patakpfpffaheikymmtlzgzatmmyqjaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063568.541123-62-78388180514254/AnsiballZ_setup.py'
Nov 25 09:39:28 compute-0 sudo[135401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:28.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:28 compute-0 python3.9[135403]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:39:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:29 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:29 compute-0 sudo[135401]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:39:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:29.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:29 compute-0 sudo[135485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrcszkzeeaozexubttfqrjbajmvtysyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063568.541123-62-78388180514254/AnsiballZ_dnf.py'
Nov 25 09:39:29 compute-0 sudo[135485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:29 compute-0 python3.9[135487]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 09:39:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:39:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:29 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0780056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:30] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 25 09:39:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:30] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 25 09:39:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:30 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:30 compute-0 ceph-mon[74207]: pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:39:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:30 compute-0 sudo[135485]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:31 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:39:31 compute-0 python3.9[135641]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:39:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:31.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:31 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc060001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:32 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0780056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:32 compute-0 python3.9[135794]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:39:32 compute-0 ceph-mon[74207]: pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:39:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:39:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:39:32 compute-0 python3.9[135944]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:39:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:33 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:39:33 compute-0 python3.9[136094]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:39:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:33.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:33 compute-0 sshd-session[135095]: Connection closed by 192.168.122.30 port 60812
Nov 25 09:39:33 compute-0 sshd-session[135092]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:39:33 compute-0 systemd-logind[744]: Session 47 logged out. Waiting for processes to exit.
Nov 25 09:39:33 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Nov 25 09:39:33 compute-0 systemd[1]: session-47.scope: Consumed 4.184s CPU time.
Nov 25 09:39:33 compute-0 systemd-logind[744]: Removed session 47.
Nov 25 09:39:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:33 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:34 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:34 compute-0 ceph-mon[74207]: pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:39:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:35 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:39:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:35.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:35 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:36 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0780056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:36 compute-0 ceph-mon[74207]: pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:39:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:36.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:36.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:36.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:36.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:36.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:37 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0780056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:39:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:37.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:37 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:38 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:38 compute-0 sshd-session[136125]: Accepted publickey for zuul from 192.168.122.30 port 54926 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:39:38 compute-0 systemd-logind[744]: New session 48 of user zuul.
Nov 25 09:39:38 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 25 09:39:38 compute-0 sshd-session[136125]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:39:38 compute-0 ceph-mon[74207]: pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:39:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:38.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:39 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0780056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:39.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:39 compute-0 python3.9[136278]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:39:39 compute-0 sudo[136279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:39 compute-0 sudo[136279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:39 compute-0 sudo[136279]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:39 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:40] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 25 09:39:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:40] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 25 09:39:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:40 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:40 compute-0 ceph-mon[74207]: pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:40 compute-0 sudo[136459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwnjpbirdlpvgucwismmldnqbqrraulp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063580.358621-110-87001176421238/AnsiballZ_file.py'
Nov 25 09:39:40 compute-0 sudo[136459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:40 compute-0 python3.9[136461]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:40 compute-0 sudo[136459]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:41 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:39:41 compute-0 sudo[136613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwecussrdsehupqbpbimipdtdmdgrahf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063581.0655506-110-68101799708061/AnsiballZ_file.py'
Nov 25 09:39:41 compute-0 sudo[136613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:41.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:41 compute-0 python3.9[136615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:41 compute-0 sudo[136613]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:41 compute-0 sudo[136766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crgjfopqymnivhdasnhrczpzwecwajav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063581.594162-157-151396021309828/AnsiballZ_stat.py'
Nov 25 09:39:41 compute-0 sudo[136766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:41 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:42 compute-0 python3.9[136768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:42 compute-0 sudo[136766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:42 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:42 compute-0 sudo[136890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmkxzyeiwzqqskumzbhgipvcajynfwuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063581.594162-157-151396021309828/AnsiballZ_copy.py'
Nov 25 09:39:42 compute-0 sudo[136890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:42 compute-0 python3.9[136892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063581.594162-157-151396021309828/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=88349e6524f4eacceb9bc64fe7d5026b43b809dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:42 compute-0 sudo[136890]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:42 compute-0 ceph-mon[74207]: pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:39:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:42.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:42 compute-0 sudo[137042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raqkybhjilmucrqefrsmcguzhcguedvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063582.6608663-157-228202614643334/AnsiballZ_stat.py'
Nov 25 09:39:42 compute-0 sudo[137042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:42 compute-0 python3.9[137044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:43 compute-0 sudo[137042]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:43 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:43 compute-0 sudo[137165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajnylpleunilpsmxohitvrwyepnnrurd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063582.6608663-157-228202614643334/AnsiballZ_copy.py'
Nov 25 09:39:43 compute-0 sudo[137165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:43.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:43 compute-0 python3.9[137167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063582.6608663-157-228202614643334/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d372d1b4272dc98810d1b396448f10f5be8f829f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:43 compute-0 sudo[137165]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:43 compute-0 sudo[137318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjboeiswsmujbkhypysiwavhcbzgtpdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063583.6826484-157-4822153596903/AnsiballZ_stat.py'
Nov 25 09:39:43 compute-0 sudo[137318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:43 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:44 compute-0 python3.9[137320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:44 compute-0 sudo[137318]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:44 compute-0 sudo[137442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpvvwfckveimwebmetghayeoabzcaxsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063583.6826484-157-4822153596903/AnsiballZ_copy.py'
Nov 25 09:39:44 compute-0 sudo[137442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:44 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:44 compute-0 python3.9[137444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063583.6826484-157-4822153596903/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=85c646b6cd1424cf3447765c4d688ff7bdb51062 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:44 compute-0 sudo[137442]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:44 compute-0 ceph-mon[74207]: pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:44.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:44 compute-0 sudo[137594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mavoxrterwuonwuykumceoifiosmwnij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063584.6401856-290-143577270656146/AnsiballZ_file.py'
Nov 25 09:39:44 compute-0 sudo[137594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:39:44
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.nfs', 'backups']
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:39:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:39:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:39:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:39:45 compute-0 python3.9[137596]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:39:45 compute-0 sudo[137594]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:45 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084005580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:45 compute-0 sudo[137746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilrahopdbgevyicesnjthgsxhsltguc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063585.124768-290-93056641721747/AnsiballZ_file.py'
Nov 25 09:39:45 compute-0 sudo[137746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:39:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:45.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:39:45 compute-0 python3.9[137748]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:45 compute-0 sudo[137746]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:45 compute-0 sudo[137899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsuhutbospsjwusxghcigryufauzoxfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063585.670489-335-272709527326002/AnsiballZ_stat.py'
Nov 25 09:39:45 compute-0 sudo[137899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:45 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:46 compute-0 python3.9[137901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:46 compute-0 sudo[137899]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:46 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:46 compute-0 sudo[138023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqadicejrcpnzbttcpetboqgszoumadu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063585.670489-335-272709527326002/AnsiballZ_copy.py'
Nov 25 09:39:46 compute-0 sudo[138023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:46 compute-0 python3.9[138025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063585.670489-335-272709527326002/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9ed03637d56d93ebe8577fb76a199a5349f6368e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:46 compute-0 sudo[138023]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:46 compute-0 ceph-mon[74207]: pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:46.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:46 compute-0 sudo[138175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjabrtmmehpxofcmkildeaibirhxbppj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063586.7054224-335-5067974307483/AnsiballZ_stat.py'
Nov 25 09:39:46 compute-0 sudo[138175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:46.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:46.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:46.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:46.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:47 compute-0 python3.9[138177]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:47 compute-0 sudo[138175]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:47 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:39:47 compute-0 sudo[138298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsqbhutjjmojivutjqvwgzvleglhoojl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063586.7054224-335-5067974307483/AnsiballZ_copy.py'
Nov 25 09:39:47 compute-0 sudo[138298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:47 compute-0 python3.9[138300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063586.7054224-335-5067974307483/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=6089737efa6d9cfbc115be5d2d9f479510a3f2d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:47 compute-0 sudo[138298]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:47 compute-0 sudo[138451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihincbkmfvzagubbzmtrfiwrmrvwawnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063587.568699-335-116205750032855/AnsiballZ_stat.py'
Nov 25 09:39:47 compute-0 sudo[138451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:47 compute-0 python3.9[138453]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:47 compute-0 sudo[138451]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:47 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084005700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:48 compute-0 sudo[138575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkjyxvmtqymhqmaecttbfmnpyubxkaji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063587.568699-335-116205750032855/AnsiballZ_copy.py'
Nov 25 09:39:48 compute-0 sudo[138575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:48 compute-0 python3.9[138577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063587.568699-335-116205750032855/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ce7cf0daa1d7235d473c5195d317be3ef5eef493 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:48 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:48 compute-0 sudo[138575]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:48 compute-0 sudo[138748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqdzejuwtwfumgzfvlnjozwzpogwiykb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063588.4721334-474-73473330735547/AnsiballZ_file.py'
Nov 25 09:39:48 compute-0 sudo[138748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:48 compute-0 sudo[138708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:39:48 compute-0 sudo[138708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:48 compute-0 sudo[138708]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:48 compute-0 ceph-mon[74207]: pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:39:48 compute-0 sudo[138755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:39:48 compute-0 sudo[138755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:48.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:48 compute-0 python3.9[138753]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:48 compute-0 sudo[138748]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:49 compute-0 sudo[138755]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:49 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:49 compute-0 sudo[138959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckdkbrgkandkfewbspxwgxjgcltoztog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063588.9318087-474-206828877643341/AnsiballZ_file.py'
Nov 25 09:39:49 compute-0 sudo[138959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:39:49 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:39:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:39:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:39:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:39:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:39:49 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:39:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:39:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:39:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:39:49 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:49 compute-0 sudo[138962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:39:49 compute-0 sudo[138962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:49 compute-0 sudo[138962]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:49 compute-0 sudo[138987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:39:49 compute-0 sudo[138987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:49 compute-0 python3.9[138961]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:49 compute-0 sudo[138959]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:49.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:49 compute-0 podman[139126]: 2025-11-25 09:39:49.532065541 +0000 UTC m=+0.027444326 container create 176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:39:49 compute-0 systemd[1]: Started libpod-conmon-176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85.scope.
Nov 25 09:39:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:49 compute-0 podman[139126]: 2025-11-25 09:39:49.588203335 +0000 UTC m=+0.083582151 container init 176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dewdney, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:39:49 compute-0 podman[139126]: 2025-11-25 09:39:49.592910625 +0000 UTC m=+0.088289410 container start 176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:39:49 compute-0 podman[139126]: 2025-11-25 09:39:49.593986552 +0000 UTC m=+0.089365337 container attach 176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dewdney, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:39:49 compute-0 youthful_dewdney[139156]: 167 167
Nov 25 09:39:49 compute-0 systemd[1]: libpod-176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85.scope: Deactivated successfully.
Nov 25 09:39:49 compute-0 conmon[139156]: conmon 176d139cba6785c4e947 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85.scope/container/memory.events
Nov 25 09:39:49 compute-0 podman[139126]: 2025-11-25 09:39:49.597379206 +0000 UTC m=+0.092757991 container died 176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfacb6d0e96bcd7cce92eada729da6d8cfe155f331b33c5c6f5f3a5a08f19012-merged.mount: Deactivated successfully.
Nov 25 09:39:49 compute-0 podman[139126]: 2025-11-25 09:39:49.617508265 +0000 UTC m=+0.112887051 container remove 176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dewdney, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:39:49 compute-0 podman[139126]: 2025-11-25 09:39:49.521229482 +0000 UTC m=+0.016608286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:39:49 compute-0 systemd[1]: libpod-conmon-176d139cba6785c4e9476af0457dbe4fb50bd38a5ce60ec6bb3cccefe16f7f85.scope: Deactivated successfully.
Nov 25 09:39:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:39:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:39:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:39:49 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:39:49 compute-0 podman[139179]: 2025-11-25 09:39:49.728595455 +0000 UTC m=+0.027459024 container create df0af8ae0d9fe6289b6ecda025ea161c19d2acd15595d5f5a745ba3b1d5e9bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:39:49 compute-0 systemd[1]: Started libpod-conmon-df0af8ae0d9fe6289b6ecda025ea161c19d2acd15595d5f5a745ba3b1d5e9bcc.scope.
Nov 25 09:39:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f872acfd41d04de06ce75c8b5f34c333a49a2f3e6dbd529550520897566fc162/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f872acfd41d04de06ce75c8b5f34c333a49a2f3e6dbd529550520897566fc162/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f872acfd41d04de06ce75c8b5f34c333a49a2f3e6dbd529550520897566fc162/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f872acfd41d04de06ce75c8b5f34c333a49a2f3e6dbd529550520897566fc162/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f872acfd41d04de06ce75c8b5f34c333a49a2f3e6dbd529550520897566fc162/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:49 compute-0 podman[139179]: 2025-11-25 09:39:49.777342983 +0000 UTC m=+0.076206574 container init df0af8ae0d9fe6289b6ecda025ea161c19d2acd15595d5f5a745ba3b1d5e9bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 25 09:39:49 compute-0 podman[139179]: 2025-11-25 09:39:49.782929169 +0000 UTC m=+0.081792739 container start df0af8ae0d9fe6289b6ecda025ea161c19d2acd15595d5f5a745ba3b1d5e9bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:39:49 compute-0 podman[139179]: 2025-11-25 09:39:49.786292116 +0000 UTC m=+0.085155687 container attach df0af8ae0d9fe6289b6ecda025ea161c19d2acd15595d5f5a745ba3b1d5e9bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:39:49 compute-0 sudo[139247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceelrsppkucghjtoucxcfixhlqmokhie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063589.422851-518-208022861451843/AnsiballZ_stat.py'
Nov 25 09:39:49 compute-0 sudo[139247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:49 compute-0 podman[139179]: 2025-11-25 09:39:49.717955326 +0000 UTC m=+0.016818915 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:39:49 compute-0 python3.9[139249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:49 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:49 compute-0 sudo[139247]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:50 compute-0 frosty_agnesi[139216]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:39:50 compute-0 frosty_agnesi[139216]: --> All data devices are unavailable
Nov 25 09:39:50 compute-0 systemd[1]: libpod-df0af8ae0d9fe6289b6ecda025ea161c19d2acd15595d5f5a745ba3b1d5e9bcc.scope: Deactivated successfully.
Nov 25 09:39:50 compute-0 podman[139179]: 2025-11-25 09:39:50.052023922 +0000 UTC m=+0.350887492 container died df0af8ae0d9fe6289b6ecda025ea161c19d2acd15595d5f5a745ba3b1d5e9bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f872acfd41d04de06ce75c8b5f34c333a49a2f3e6dbd529550520897566fc162-merged.mount: Deactivated successfully.
Nov 25 09:39:50 compute-0 podman[139179]: 2025-11-25 09:39:50.076615141 +0000 UTC m=+0.375478711 container remove df0af8ae0d9fe6289b6ecda025ea161c19d2acd15595d5f5a745ba3b1d5e9bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:39:50 compute-0 systemd[1]: libpod-conmon-df0af8ae0d9fe6289b6ecda025ea161c19d2acd15595d5f5a745ba3b1d5e9bcc.scope: Deactivated successfully.
Nov 25 09:39:50 compute-0 sudo[138987]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:50 compute-0 sudo[139326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:39:50 compute-0 sudo[139326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:50 compute-0 sudo[139326]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:50 compute-0 sudo[139370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:39:50 compute-0 sudo[139370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:50 compute-0 sudo[139441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esgrjozbkyjvnxlioixhpedscsrwsjrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063589.422851-518-208022861451843/AnsiballZ_copy.py'
Nov 25 09:39:50 compute-0 sudo[139441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:50] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:39:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:39:50] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:39:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:50 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084006aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:50 compute-0 python3.9[139443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063589.422851-518-208022861451843/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e7a80951ee39b9750062a4136e5d4afb4f474b68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:50 compute-0 sudo[139441]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/093950 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:39:50 compute-0 podman[139492]: 2025-11-25 09:39:50.488855 +0000 UTC m=+0.028984007 container create 0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Nov 25 09:39:50 compute-0 systemd[1]: Started libpod-conmon-0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078.scope.
Nov 25 09:39:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:50 compute-0 podman[139492]: 2025-11-25 09:39:50.531332584 +0000 UTC m=+0.071461601 container init 0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:39:50 compute-0 podman[139492]: 2025-11-25 09:39:50.535532628 +0000 UTC m=+0.075661635 container start 0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:39:50 compute-0 podman[139492]: 2025-11-25 09:39:50.536511121 +0000 UTC m=+0.076640129 container attach 0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:39:50 compute-0 nostalgic_feynman[139526]: 167 167
Nov 25 09:39:50 compute-0 systemd[1]: libpod-0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078.scope: Deactivated successfully.
Nov 25 09:39:50 compute-0 conmon[139526]: conmon 0100e3fe82b55b67afed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078.scope/container/memory.events
Nov 25 09:39:50 compute-0 podman[139492]: 2025-11-25 09:39:50.539586417 +0000 UTC m=+0.079715424 container died 0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-211a8263f4c87edd3bd6c5bba3e891f086be0c6e3b6d5413a5c03e57f16644f0-merged.mount: Deactivated successfully.
Nov 25 09:39:50 compute-0 podman[139492]: 2025-11-25 09:39:50.557323499 +0000 UTC m=+0.097452507 container remove 0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:39:50 compute-0 podman[139492]: 2025-11-25 09:39:50.47759849 +0000 UTC m=+0.017727507 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:39:50 compute-0 systemd[1]: libpod-conmon-0100e3fe82b55b67afedd9349a3a55c6fd8f2ddcf05c552d8ea485693b438078.scope: Deactivated successfully.
Nov 25 09:39:50 compute-0 ceph-mon[74207]: pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:50 compute-0 podman[139609]: 2025-11-25 09:39:50.670023889 +0000 UTC m=+0.028736660 container create a7ae2c1ca92f4f4304c0e0fa29e8119300a6a51198a8b03cdb96ed369dfdf1ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 09:39:50 compute-0 systemd[1]: Started libpod-conmon-a7ae2c1ca92f4f4304c0e0fa29e8119300a6a51198a8b03cdb96ed369dfdf1ea.scope.
Nov 25 09:39:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3020c9a1f3bbb0215f88f19890ee3c1341e83c5cf119880ba5f7d14125270fbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3020c9a1f3bbb0215f88f19890ee3c1341e83c5cf119880ba5f7d14125270fbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3020c9a1f3bbb0215f88f19890ee3c1341e83c5cf119880ba5f7d14125270fbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3020c9a1f3bbb0215f88f19890ee3c1341e83c5cf119880ba5f7d14125270fbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:50 compute-0 podman[139609]: 2025-11-25 09:39:50.731449657 +0000 UTC m=+0.090162429 container init a7ae2c1ca92f4f4304c0e0fa29e8119300a6a51198a8b03cdb96ed369dfdf1ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:39:50 compute-0 podman[139609]: 2025-11-25 09:39:50.738746046 +0000 UTC m=+0.097458818 container start a7ae2c1ca92f4f4304c0e0fa29e8119300a6a51198a8b03cdb96ed369dfdf1ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:39:50 compute-0 podman[139609]: 2025-11-25 09:39:50.742947834 +0000 UTC m=+0.101660606 container attach a7ae2c1ca92f4f4304c0e0fa29e8119300a6a51198a8b03cdb96ed369dfdf1ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:39:50 compute-0 podman[139609]: 2025-11-25 09:39:50.658376621 +0000 UTC m=+0.017089392 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:39:50 compute-0 sudo[139677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwvoxwwlqxoxmgakarchghibbjvzyjda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063590.5331979-518-11970303739602/AnsiballZ_stat.py'
Nov 25 09:39:50 compute-0 sudo[139677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:50.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:50 compute-0 python3.9[139679]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:50 compute-0 sudo[139677]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]: {
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:     "1": [
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:         {
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "devices": [
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "/dev/loop3"
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             ],
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "lv_name": "ceph_lv0",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "lv_size": "21470642176",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "name": "ceph_lv0",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "tags": {
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.cluster_name": "ceph",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.crush_device_class": "",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.encrypted": "0",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.osd_id": "1",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.type": "block",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.vdo": "0",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:                 "ceph.with_tpm": "0"
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             },
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "type": "block",
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:             "vg_name": "ceph_vg0"
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:         }
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]:     ]
Nov 25 09:39:50 compute-0 trusting_ishizaka[139646]: }
Nov 25 09:39:50 compute-0 systemd[1]: libpod-a7ae2c1ca92f4f4304c0e0fa29e8119300a6a51198a8b03cdb96ed369dfdf1ea.scope: Deactivated successfully.
Nov 25 09:39:50 compute-0 podman[139609]: 2025-11-25 09:39:50.974020759 +0000 UTC m=+0.332733531 container died a7ae2c1ca92f4f4304c0e0fa29e8119300a6a51198a8b03cdb96ed369dfdf1ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3020c9a1f3bbb0215f88f19890ee3c1341e83c5cf119880ba5f7d14125270fbc-merged.mount: Deactivated successfully.
Nov 25 09:39:50 compute-0 podman[139609]: 2025-11-25 09:39:50.997757898 +0000 UTC m=+0.356470670 container remove a7ae2c1ca92f4f4304c0e0fa29e8119300a6a51198a8b03cdb96ed369dfdf1ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:39:51 compute-0 systemd[1]: libpod-conmon-a7ae2c1ca92f4f4304c0e0fa29e8119300a6a51198a8b03cdb96ed369dfdf1ea.scope: Deactivated successfully.
Nov 25 09:39:51 compute-0 sudo[139370]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:51 compute-0 sudo[139740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:39:51 compute-0 sudo[139740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:51 compute-0 sudo[139740]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:51 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:51 compute-0 sudo[139788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:39:51 compute-0 sudo[139788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:51 compute-0 sudo[139863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjdaaevknluruniwhmmopiofaaoorleh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063590.5331979-518-11970303739602/AnsiballZ_copy.py'
Nov 25 09:39:51 compute-0 sudo[139863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:39:51 compute-0 python3.9[139865]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063590.5331979-518-11970303739602/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=6089737efa6d9cfbc115be5d2d9f479510a3f2d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:51 compute-0 sudo[139863]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:51.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:51 compute-0 podman[139899]: 2025-11-25 09:39:51.404665901 +0000 UTC m=+0.028286402 container create af09d0d013ce0146dc10cfcd5ae3304f3f859503cf6b2e5c51783f42087e9ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_montalcini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:39:51 compute-0 systemd[1]: Started libpod-conmon-af09d0d013ce0146dc10cfcd5ae3304f3f859503cf6b2e5c51783f42087e9ab0.scope.
Nov 25 09:39:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:51 compute-0 podman[139899]: 2025-11-25 09:39:51.45311655 +0000 UTC m=+0.076737081 container init af09d0d013ce0146dc10cfcd5ae3304f3f859503cf6b2e5c51783f42087e9ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:39:51 compute-0 podman[139899]: 2025-11-25 09:39:51.458699561 +0000 UTC m=+0.082320061 container start af09d0d013ce0146dc10cfcd5ae3304f3f859503cf6b2e5c51783f42087e9ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 09:39:51 compute-0 podman[139899]: 2025-11-25 09:39:51.459875366 +0000 UTC m=+0.083495858 container attach af09d0d013ce0146dc10cfcd5ae3304f3f859503cf6b2e5c51783f42087e9ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 25 09:39:51 compute-0 busy_montalcini[139936]: 167 167
Nov 25 09:39:51 compute-0 systemd[1]: libpod-af09d0d013ce0146dc10cfcd5ae3304f3f859503cf6b2e5c51783f42087e9ab0.scope: Deactivated successfully.
Nov 25 09:39:51 compute-0 podman[139899]: 2025-11-25 09:39:51.462974015 +0000 UTC m=+0.086594515 container died af09d0d013ce0146dc10cfcd5ae3304f3f859503cf6b2e5c51783f42087e9ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_montalcini, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0403323cca15a002ed31a54696365108d4b831109ffe1323ca5ea2766c0aa0af-merged.mount: Deactivated successfully.
Nov 25 09:39:51 compute-0 podman[139899]: 2025-11-25 09:39:51.486498804 +0000 UTC m=+0.110119306 container remove af09d0d013ce0146dc10cfcd5ae3304f3f859503cf6b2e5c51783f42087e9ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:39:51 compute-0 podman[139899]: 2025-11-25 09:39:51.393162185 +0000 UTC m=+0.016782706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:39:51 compute-0 systemd[1]: libpod-conmon-af09d0d013ce0146dc10cfcd5ae3304f3f859503cf6b2e5c51783f42087e9ab0.scope: Deactivated successfully.
Nov 25 09:39:51 compute-0 podman[140014]: 2025-11-25 09:39:51.602119416 +0000 UTC m=+0.030833533 container create 64c7b495edf754f3bdb809d833af7c4a4184548667fae883f5b6191cf0afbbcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_visvesvaraya, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 09:39:51 compute-0 systemd[1]: Started libpod-conmon-64c7b495edf754f3bdb809d833af7c4a4184548667fae883f5b6191cf0afbbcf.scope.
Nov 25 09:39:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686143b334f92b9866e5abd635970348162efd054401a033b5c60e13c4376c5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686143b334f92b9866e5abd635970348162efd054401a033b5c60e13c4376c5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686143b334f92b9866e5abd635970348162efd054401a033b5c60e13c4376c5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686143b334f92b9866e5abd635970348162efd054401a033b5c60e13c4376c5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:39:51 compute-0 podman[140014]: 2025-11-25 09:39:51.65323703 +0000 UTC m=+0.081951147 container init 64c7b495edf754f3bdb809d833af7c4a4184548667fae883f5b6191cf0afbbcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_visvesvaraya, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:39:51 compute-0 podman[140014]: 2025-11-25 09:39:51.66063013 +0000 UTC m=+0.089344238 container start 64c7b495edf754f3bdb809d833af7c4a4184548667fae883f5b6191cf0afbbcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:39:51 compute-0 podman[140014]: 2025-11-25 09:39:51.662714289 +0000 UTC m=+0.091428396 container attach 64c7b495edf754f3bdb809d833af7c4a4184548667fae883f5b6191cf0afbbcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:39:51 compute-0 podman[140014]: 2025-11-25 09:39:51.591024599 +0000 UTC m=+0.019738726 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:39:51 compute-0 sudo[140103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwjffnhjyfrzaaknhvfwiclchpdeubtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063591.4966464-518-151863394683070/AnsiballZ_stat.py'
Nov 25 09:39:51 compute-0 sudo[140103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:51 compute-0 python3.9[140105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:51 compute-0 sudo[140103]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:51 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:52 compute-0 sudo[140295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkqgfgkoazrrkscpmbzqjcucyevyfjch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063591.4966464-518-151863394683070/AnsiballZ_copy.py'
Nov 25 09:39:52 compute-0 sudo[140295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:52 compute-0 recursing_visvesvaraya[140065]: {}
Nov 25 09:39:52 compute-0 lvm[140302]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:39:52 compute-0 lvm[140302]: VG ceph_vg0 finished
Nov 25 09:39:52 compute-0 systemd[1]: libpod-64c7b495edf754f3bdb809d833af7c4a4184548667fae883f5b6191cf0afbbcf.scope: Deactivated successfully.
Nov 25 09:39:52 compute-0 podman[140014]: 2025-11-25 09:39:52.16602049 +0000 UTC m=+0.594734597 container died 64c7b495edf754f3bdb809d833af7c4a4184548667fae883f5b6191cf0afbbcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:39:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-686143b334f92b9866e5abd635970348162efd054401a033b5c60e13c4376c5e-merged.mount: Deactivated successfully.
Nov 25 09:39:52 compute-0 podman[140014]: 2025-11-25 09:39:52.190688153 +0000 UTC m=+0.619402260 container remove 64c7b495edf754f3bdb809d833af7c4a4184548667fae883f5b6191cf0afbbcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:39:52 compute-0 systemd[1]: libpod-conmon-64c7b495edf754f3bdb809d833af7c4a4184548667fae883f5b6191cf0afbbcf.scope: Deactivated successfully.
Nov 25 09:39:52 compute-0 sudo[139788]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:39:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:39:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:52 compute-0 sudo[140312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:39:52 compute-0 sudo[140312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:52 compute-0 sudo[140312]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:52 compute-0 python3.9[140298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063591.4966464-518-151863394683070/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1a8bac1e5a79d6296074849f2dd6c699a0e751b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:52 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:52 compute-0 sudo[140295]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:52 compute-0 ceph-mon[74207]: pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:39:52 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:52 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:39:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:52.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:53 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:53 compute-0 sudo[140486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhojcuxtvdqxpieomxolqrgfjyctffus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063593.152371-708-234222127224454/AnsiballZ_file.py'
Nov 25 09:39:53 compute-0 sudo[140486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:53 compute-0 python3.9[140488]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:53 compute-0 sudo[140486]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:53 compute-0 sudo[140639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqceycqbargcihtpaslllaagyjjfmfds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063593.617632-733-91660832137593/AnsiballZ_stat.py'
Nov 25 09:39:53 compute-0 sudo[140639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:53 compute-0 python3.9[140641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:53 compute-0 sudo[140639]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:53 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:54 compute-0 sudo[140764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbznxndfalqpbugvynrxwxhpxnozxruo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063593.617632-733-91660832137593/AnsiballZ_copy.py'
Nov 25 09:39:54 compute-0 sudo[140764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:54 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:54 compute-0 python3.9[140766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063593.617632-733-91660832137593/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:54 compute-0 sudo[140764]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:54 compute-0 ceph-mon[74207]: pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:54 compute-0 sudo[140916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkgmcyecdbpdzezjkrigevzmxfjhrdmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063594.548485-778-89550307170502/AnsiballZ_file.py'
Nov 25 09:39:54 compute-0 sudo[140916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:39:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:54.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:39:54 compute-0 python3.9[140918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:54 compute-0 sudo[140916]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:55 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:55 compute-0 sudo[141068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-howgzyowwdmtdcwklnsyzzfchmlprdit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063595.0105164-804-128834889771613/AnsiballZ_stat.py'
Nov 25 09:39:55 compute-0 sudo[141068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:39:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:39:55 compute-0 python3.9[141070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:55 compute-0 sudo[141068]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:55.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:55 compute-0 sudo[141192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhnxefrqwaovzbzkdefpqyeomgdyxii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063595.0105164-804-128834889771613/AnsiballZ_copy.py'
Nov 25 09:39:55 compute-0 sudo[141192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:55 compute-0 python3.9[141194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063595.0105164-804-128834889771613/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:55 compute-0 sudo[141192]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:55 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084006a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:56 compute-0 sudo[141345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsnpoliyzkylynwihnuhuspibjciwcoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063596.074674-847-25497779098271/AnsiballZ_file.py'
Nov 25 09:39:56 compute-0 sudo[141345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:56 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:56 compute-0 python3.9[141347]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:56 compute-0 sudo[141345]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:56 compute-0 ceph-mon[74207]: pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:39:56 compute-0 sudo[141497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rplgdmnpgjwgbkcfmwsnhirlgnzkbtvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063596.5416222-873-53944309743425/AnsiballZ_stat.py'
Nov 25 09:39:56 compute-0 sudo[141497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:39:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:56.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:39:56 compute-0 python3.9[141499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:56 compute-0 sudo[141497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:56.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:56.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:56.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:39:56.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:39:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:57 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:57 compute-0 sudo[141620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxuejekdgxfocxtucjraumzacxsjqnov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063596.5416222-873-53944309743425/AnsiballZ_copy.py'
Nov 25 09:39:57 compute-0 sudo[141620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:39:57 compute-0 python3.9[141622]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063596.5416222-873-53944309743425/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:57 compute-0 sudo[141620]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:57.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:57 compute-0 sudo[141773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkxlvjvlvmigasxpntsobksgpcviitcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063597.4832573-919-161665281143419/AnsiballZ_file.py'
Nov 25 09:39:57 compute-0 sudo[141773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:39:57 compute-0 python3.9[141775]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:57 compute-0 sudo[141773]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:57 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:58 compute-0 sudo[141926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrubwwfvieemnrnnhqdcaxlspnpfnlhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063597.9665833-943-228287975835836/AnsiballZ_stat.py'
Nov 25 09:39:58 compute-0 sudo[141926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:58 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:39:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:58 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084006a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:58 compute-0 python3.9[141928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:58 compute-0 sudo[141926]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:58 compute-0 sudo[142049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqrxyrtwrhwfaxmxktlowpstpsycuhbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063597.9665833-943-228287975835836/AnsiballZ_copy.py'
Nov 25 09:39:58 compute-0 sudo[142049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:58 compute-0 sshd-session[71238]: Received disconnect from 192.168.26.191 port 39028:11: disconnected by user
Nov 25 09:39:58 compute-0 sshd-session[71238]: Disconnected from user zuul 192.168.26.191 port 39028
Nov 25 09:39:58 compute-0 sshd-session[71235]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:39:58 compute-0 systemd-logind[744]: Session 18 logged out. Waiting for processes to exit.
Nov 25 09:39:58 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 09:39:58 compute-0 systemd[1]: session-18.scope: Consumed 1min 10.057s CPU time.
Nov 25 09:39:58 compute-0 systemd-logind[744]: Removed session 18.
Nov 25 09:39:58 compute-0 ceph-mon[74207]: pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:39:58 compute-0 python3.9[142051]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063597.9665833-943-228287975835836/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:39:58 compute-0 sudo[142049]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:58.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:59 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:39:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:39:59 compute-0 sudo[142201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvlqvnfbvmxrnxrbmvtodoafpmufdrun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063599.0415275-994-136419624300349/AnsiballZ_file.py'
Nov 25 09:39:59 compute-0 sudo[142201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:59 compute-0 python3.9[142203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:39:59 compute-0 sudo[142201]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:39:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:39:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:59.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:39:59 compute-0 sudo[142225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:39:59 compute-0 sudo[142225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:39:59 compute-0 sudo[142225]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:59 compute-0 sudo[142379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxlqqfeacdtuzygiomthtnawhxslqqrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063599.5188184-1018-189025127400735/AnsiballZ_stat.py'
Nov 25 09:39:59 compute-0 sudo[142379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:39:59 compute-0 python3.9[142381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:39:59 compute-0 sudo[142379]: pam_unix(sudo:session): session closed for user root
Nov 25 09:39:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:39:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:39:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:39:59 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:00 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 25 09:40:00 compute-0 sudo[142503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-craepnqubcecswgliewucimvakcxlkot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063599.5188184-1018-189025127400735/AnsiballZ_copy.py'
Nov 25 09:40:00 compute-0 sudo[142503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:00] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Nov 25 09:40:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:00] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Nov 25 09:40:00 compute-0 python3.9[142505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063599.5188184-1018-189025127400735/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:00 compute-0 sudo[142503]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:00 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:00 compute-0 sudo[142655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmkgzaxrhktclghahtdfkiprzrjhzacf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063600.4211278-1064-54557528505614/AnsiballZ_file.py'
Nov 25 09:40:00 compute-0 sudo[142655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:00 compute-0 ceph-mon[74207]: pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:40:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:00 compute-0 ceph-mon[74207]: overall HEALTH_OK
Nov 25 09:40:00 compute-0 python3.9[142657]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:00 compute-0 sudo[142655]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:00.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:01 compute-0 sudo[142807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmekmdjhpwonbsyaqrsvxyxxndzpggjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063600.8986356-1090-205482711670706/AnsiballZ_stat.py'
Nov 25 09:40:01 compute-0 sudo[142807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:01 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840074b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:40:01 compute-0 python3.9[142809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:01 compute-0 sudo[142807]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:01 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:40:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:01 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:40:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:40:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:01.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:40:01 compute-0 sudo[142930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkyjinvcoxceirawyqemighwdittfxby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063600.8986356-1090-205482711670706/AnsiballZ_copy.py'
Nov 25 09:40:01 compute-0 sudo[142930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:01 compute-0 python3.9[142932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063600.8986356-1090-205482711670706/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:01 compute-0 sudo[142930]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:01 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:02 compute-0 sshd-session[136128]: Connection closed by 192.168.122.30 port 54926
Nov 25 09:40:02 compute-0 sshd-session[136125]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:40:02 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 25 09:40:02 compute-0 systemd[1]: session-48.scope: Consumed 15.747s CPU time.
Nov 25 09:40:02 compute-0 systemd-logind[744]: Session 48 logged out. Waiting for processes to exit.
Nov 25 09:40:02 compute-0 systemd-logind[744]: Removed session 48.
Nov 25 09:40:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:02 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:02 compute-0 ceph-mon[74207]: pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:40:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:40:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:02.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:40:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:03 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:40:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:40:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:03.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:40:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:03 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:04 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:04 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:40:04 compute-0 ceph-mon[74207]: pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:40:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:40:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:04.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:40:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:05 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:40:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:40:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:05.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:40:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:05 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084008650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:06 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:06 compute-0 ceph-mon[74207]: pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:40:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:06.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:06.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:06.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:06.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:06.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:07 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:40:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:40:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:40:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:07 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:08 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:08 compute-0 sshd-session[142965]: Accepted publickey for zuul from 192.168.122.30 port 35600 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:40:08 compute-0 systemd-logind[744]: New session 49 of user zuul.
Nov 25 09:40:08 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 25 09:40:08 compute-0 sshd-session[142965]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:40:08 compute-0 ceph-mon[74207]: pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:40:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:08.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:08 compute-0 sudo[143118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cavwltpcefrgnaepaxfkbqrraahneqkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063608.4777725-26-164802342562501/AnsiballZ_file.py'
Nov 25 09:40:08 compute-0 sudo[143118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:09 compute-0 python3.9[143120]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:09 compute-0 sudo[143118]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:09 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:40:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:40:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:09.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:40:09 compute-0 sudo[143270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmndbksglqsbnzfekdrveslmwcvswvfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063609.1873755-62-56170026948942/AnsiballZ_stat.py'
Nov 25 09:40:09 compute-0 sudo[143270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:09 compute-0 python3.9[143272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:09 compute-0 sudo[143270]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:09 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:10 compute-0 sudo[143395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbqtnouoxvotbdjgrxxzguaxmdidvdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063609.1873755-62-56170026948942/AnsiballZ_copy.py'
Nov 25 09:40:10 compute-0 sudo[143395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:10 compute-0 python3.9[143397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063609.1873755-62-56170026948942/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=366a48c0bc0104e6b502b94bc86d9db21512d98a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:10 compute-0 sudo[143395]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:10] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Nov 25 09:40:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:10] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Nov 25 09:40:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:10 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:10 compute-0 sudo[143547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txaugfulenyffcueeuvypwjynhpmhohs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063610.2993846-62-19952805177310/AnsiballZ_stat.py'
Nov 25 09:40:10 compute-0 sudo[143547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094010 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:40:10 compute-0 python3.9[143549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:10 compute-0 sudo[143547]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:10 compute-0 ceph-mon[74207]: pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:40:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:10.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:10 compute-0 sudo[143670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wulyhabopvqetnrgdxecumkvzofgsgty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063610.2993846-62-19952805177310/AnsiballZ_copy.py'
Nov 25 09:40:10 compute-0 sudo[143670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:10 compute-0 python3.9[143672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063610.2993846-62-19952805177310/.source.conf _original_basename=ceph.conf follow=False checksum=a12b603cb850b5616045745d010769596d2b9016 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:11 compute-0 sudo[143670]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:11 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:40:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:40:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:11.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:40:11 compute-0 sshd-session[142968]: Connection closed by 192.168.122.30 port 35600
Nov 25 09:40:11 compute-0 sshd-session[142965]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:40:11 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 25 09:40:11 compute-0 systemd[1]: session-49.scope: Consumed 1.747s CPU time.
Nov 25 09:40:11 compute-0 systemd-logind[744]: Session 49 logged out. Waiting for processes to exit.
Nov 25 09:40:11 compute-0 systemd-logind[744]: Removed session 49.
Nov 25 09:40:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:11 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:12 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:12 compute-0 ceph-mon[74207]: pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:40:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:40:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:12.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:40:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:13 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:40:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:13.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:13 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:14 compute-0 ceph-mon[74207]: pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:40:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:14.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:40:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:15 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:40:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:15.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:15 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:16 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:16 compute-0 ceph-mon[74207]: pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:40:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:16.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:16.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:17.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:17.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:17.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:17 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090005260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:17 compute-0 sshd-session[143704]: Accepted publickey for zuul from 192.168.122.30 port 56440 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:40:17 compute-0 systemd-logind[744]: New session 50 of user zuul.
Nov 25 09:40:17 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 25 09:40:17 compute-0 sshd-session[143704]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:40:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:40:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:17.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:17 compute-0 ceph-mon[74207]: pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:40:17 compute-0 python3.9[143858]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:40:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:17 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:18 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:18 compute-0 sudo[144013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdcdtqklejknutpdbuhlddqgwsezbwzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063618.372982-62-132658985898632/AnsiballZ_file.py'
Nov 25 09:40:18 compute-0 sudo[144013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:18.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:18 compute-0 python3.9[144015]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:18 compute-0 sudo[144013]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:19 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:19 compute-0 sudo[144165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pajjwurtgsyzwqwlriekyvfyfjkobwzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063618.9757764-62-270613157837442/AnsiballZ_file.py'
Nov 25 09:40:19 compute-0 sudo[144165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:40:19 compute-0 python3.9[144167]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:19 compute-0 sudo[144165]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:19.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:19 compute-0 sudo[144236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:19 compute-0 sudo[144236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:19 compute-0 sudo[144236]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:19 compute-0 python3.9[144342]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:40:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:19 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090005260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:20] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:40:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:20] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:40:20 compute-0 ceph-mon[74207]: pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:40:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:20 compute-0 sudo[144494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsgxluafmedrnlfpvllivwphfdjqhghh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063620.2117605-131-176525601371717/AnsiballZ_seboolean.py'
Nov 25 09:40:20 compute-0 sudo[144494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:20 compute-0 python3.9[144496]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 09:40:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:20.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:21 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:40:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:21.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:21 compute-0 sudo[144494]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:21 compute-0 sudo[144652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noioxonwuhciyxfngjmqhetsiqweervo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063621.797481-161-69501874586432/AnsiballZ_setup.py'
Nov 25 09:40:21 compute-0 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 09:40:21 compute-0 sudo[144652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:22 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:22 compute-0 python3.9[144654]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:40:22 compute-0 ceph-mon[74207]: pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:40:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:22 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090005400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:22 compute-0 sudo[144652]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:22 compute-0 sudo[144736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpadejsvgchoiedgujolztkuesghajot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063621.797481-161-69501874586432/AnsiballZ_dnf.py'
Nov 25 09:40:22 compute-0 sudo[144736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:22.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:22 compute-0 python3.9[144738]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:40:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:23 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:23.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:23 compute-0 sudo[144736]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:24 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:24 compute-0 ceph-mon[74207]: pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:24 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:24 compute-0 sudo[144891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfkbrjifednmwdubptcrvtrjrfoztkhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063624.0574226-197-174468359142161/AnsiballZ_systemd.py'
Nov 25 09:40:24 compute-0 sudo[144891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:24.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:24 compute-0 python3.9[144893]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:40:24 compute-0 sudo[144891]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:25 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090006900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:25 compute-0 sudo[145046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svbatfonwucfybkyytipjczezydkppol ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063625.0832195-221-86284398803850/AnsiballZ_edpm_nftables_snippet.py'
Nov 25 09:40:25 compute-0 sudo[145046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:25.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:25 compute-0 python3[145048]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 25 09:40:25 compute-0 sudo[145046]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:25 compute-0 sudo[145200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhpoiwpqsxtyznxpogusunoivympydgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063625.8133786-248-71836267243657/AnsiballZ_file.py'
Nov 25 09:40:25 compute-0 sudo[145200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:26 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:26 compute-0 python3.9[145202]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:26 compute-0 sudo[145200]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:26 compute-0 ceph-mon[74207]: pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:26 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:26 compute-0 sudo[145352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxkgzoghwlbrerubvggrtadscijvvjaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063626.3294897-272-19009729272705/AnsiballZ_stat.py'
Nov 25 09:40:26 compute-0 sudo[145352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:26 compute-0 python3.9[145354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:26 compute-0 sudo[145352]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:26.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:26.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:26.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:26.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:26.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:27 compute-0 sudo[145430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgnomcisozqsuijrvkctnywlnkszowrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063626.3294897-272-19009729272705/AnsiballZ_file.py'
Nov 25 09:40:27 compute-0 sudo[145430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:27 compute-0 python3.9[145432]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:27 compute-0 sudo[145430]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:40:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:40:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:27.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:40:27 compute-0 sudo[145582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sipijjjnbsehzsepwcjbnbccatgiafrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063627.2975717-308-149663962515998/AnsiballZ_stat.py'
Nov 25 09:40:27 compute-0 sudo[145582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:27 compute-0 python3.9[145584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:27 compute-0 sudo[145582]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:27 compute-0 sudo[145661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdheuxbcraufpuzcngzikmgjaqxpxgjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063627.2975717-308-149663962515998/AnsiballZ_file.py'
Nov 25 09:40:27 compute-0 sudo[145661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:27 compute-0 python3.9[145663]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8up0ycdu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:27 compute-0 sudo[145661]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:28 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090006a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:28 compute-0 ceph-mon[74207]: pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:40:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:28 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090006a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:28 compute-0 sudo[145814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckahtohkcghophgpaxepupbztznedtge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063628.1174269-344-237388649525254/AnsiballZ_stat.py'
Nov 25 09:40:28 compute-0 sudo[145814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:28 compute-0 python3.9[145816]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:28 compute-0 sudo[145814]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:28 compute-0 sudo[145892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aojnpthfoiysnoatjgmwhsmfwzqhmacp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063628.1174269-344-237388649525254/AnsiballZ_file.py'
Nov 25 09:40:28 compute-0 sudo[145892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:28 compute-0 python3.9[145894]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:28.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:28 compute-0 sudo[145892]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:29 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090006a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:29.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:29 compute-0 sudo[146044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwvawsxmmwafsvibjtuwvxncgnanufqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063629.0637844-383-152665336383409/AnsiballZ_command.py'
Nov 25 09:40:29 compute-0 sudo[146044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:29 compute-0 python3.9[146046]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:29 compute-0 sudo[146044]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:40:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:30 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:30 compute-0 sudo[146199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iawidvguiuxwpuobxscqrcbwiuznxesu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063629.8541408-407-66549434619872/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 09:40:30 compute-0 sudo[146199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:30] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:40:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:30] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:40:30 compute-0 ceph-mon[74207]: pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:30 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090006a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:30 compute-0 python3[146201]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 09:40:30 compute-0 sudo[146199]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:30 compute-0 sudo[146351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odhhahheylvlslsfwclnsjzkozuctpjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063630.5096815-431-228241351448821/AnsiballZ_stat.py'
Nov 25 09:40:30 compute-0 sudo[146351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:30.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:30 compute-0 python3.9[146353]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:30 compute-0 sudo[146351]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:31 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090006a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:40:31 compute-0 sudo[146476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oquuzezbginpnuufaffiqqsqowexjpqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063630.5096815-431-228241351448821/AnsiballZ_copy.py'
Nov 25 09:40:31 compute-0 sudo[146476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:31 compute-0 python3.9[146478]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063630.5096815-431-228241351448821/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:31.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:31 compute-0 sudo[146476]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:31 compute-0 sudo[146629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xflvjvcodqrswxaskvnzagxwaxjrhzzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063631.5832474-476-126926761096981/AnsiballZ_stat.py'
Nov 25 09:40:31 compute-0 sudo[146629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:31 compute-0 python3.9[146631]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:31 compute-0 sudo[146629]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:32 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:32 compute-0 sudo[146755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykgswnvjynlmzfeqhovaxfnaczouncdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063631.5832474-476-126926761096981/AnsiballZ_copy.py'
Nov 25 09:40:32 compute-0 sudo[146755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:32 compute-0 ceph-mon[74207]: pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:40:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:32 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:32 compute-0 python3.9[146757]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063631.5832474-476-126926761096981/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:32 compute-0 sudo[146755]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:32.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:32 compute-0 sudo[146907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mctgjwnhjucsapgmgypmclhnyrlpcdoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063632.6631424-521-78425824246384/AnsiballZ_stat.py'
Nov 25 09:40:32 compute-0 sudo[146907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:33 compute-0 python3.9[146909]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:33 compute-0 sudo[146907]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:33 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:33 compute-0 sudo[147032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjyzknunbvlerzspyqrcqydkbqvjalrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063632.6631424-521-78425824246384/AnsiballZ_copy.py'
Nov 25 09:40:33 compute-0 sudo[147032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:33 compute-0 python3.9[147034]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063632.6631424-521-78425824246384/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:33.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:33 compute-0 sudo[147032]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:33 compute-0 sudo[147185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coulljzyiqxzwxdkqbodnoxnmuatwhcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063633.6694567-566-170653676014691/AnsiballZ_stat.py'
Nov 25 09:40:33 compute-0 sudo[147185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:34 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:34 compute-0 python3.9[147187]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:34 compute-0 sudo[147185]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:34 compute-0 sudo[147311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmnqbmyxzrmhqneqnrkyjeonrkowoapi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063633.6694567-566-170653676014691/AnsiballZ_copy.py'
Nov 25 09:40:34 compute-0 sudo[147311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:34 compute-0 ceph-mon[74207]: pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:34 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:34 compute-0 python3.9[147313]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063633.6694567-566-170653676014691/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:34 compute-0 sudo[147311]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:34.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:34 compute-0 sudo[147464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whzttzkchyvodnmqffhhjdtsrfeenxuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063634.625504-611-23628984162065/AnsiballZ_stat.py'
Nov 25 09:40:34 compute-0 sudo[147464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:35 compute-0 python3.9[147466]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:35 compute-0 sudo[147464]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:35 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054007770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:35 compute-0 sudo[147589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igileeouywpjbzjuakamobkomxivqsix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063634.625504-611-23628984162065/AnsiballZ_copy.py'
Nov 25 09:40:35 compute-0 sudo[147589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:35.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:35 compute-0 python3.9[147591]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063634.625504-611-23628984162065/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:35 compute-0 sudo[147589]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:35 compute-0 sudo[147743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcnzugyciybfdxszhqmbdghcdlnwkspu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063635.7881556-656-230322471237226/AnsiballZ_file.py'
Nov 25 09:40:35 compute-0 sudo[147743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:36 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc098002630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:36 compute-0 python3.9[147745]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:36 compute-0 sudo[147743]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:36 compute-0 ceph-mon[74207]: pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:36 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:36 compute-0 sudo[147895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfvcsjprwencvriljipcgtrvvmbylay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063636.3105054-680-271729866651810/AnsiballZ_command.py'
Nov 25 09:40:36 compute-0 sudo[147895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:36 compute-0 python3.9[147897]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:36 compute-0 sudo[147895]: pam_unix(sudo:session): session closed for user root
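The command in the task above is the EDPM firewall validation step: the five rule fragments under /etc/nftables/ are concatenated and parsed by nft in check-only mode, so a syntax error in any fragment fails the task before anything touches the live ruleset. A minimal stand-alone reproduction of the logged command, assuming the same five fragment files exist, would be:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -    # -c: parse and check only, nothing is applied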
Nov 25 09:40:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:36.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:36.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:36.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:36.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
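All three webhook failures above share one root cause: the receiver hostnames (np0005534694/5/6.shiftstack) do not resolve against the DNS server at 192.168.122.80, so every notify attempt dies during the TCP dial. A quick check that this is a resolver problem rather than an alertmanager one, using the hostname and server taken straight from the error text, would be:

    dig np0005534694.shiftstack @192.168.122.80    # status: NXDOMAIN would match the "no such host" errors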
Nov 25 09:40:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:37 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:37 compute-0 sudo[148050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjvcbriczeqjccbelckytmilsrtiuymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063636.8430364-704-8075583799793/AnsiballZ_blockinfile.py'
Nov 25 09:40:37 compute-0 sudo[148050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:40:37 compute-0 python3.9[148052]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:37 compute-0 sudo[148050]: pam_unix(sudo:session): session closed for user root
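The blockinfile arguments above fully determine the managed block written to /etc/sysconfig/nftables.conf: with marker "# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN, and marker_end=END, the block should read as follows (reconstructed from the logged parameters, not captured from the file itself):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Because validate=nft -c -f %s is set, the whole resulting file is syntax-checked before the edit is committed.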
Nov 25 09:40:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:37.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:37 compute-0 sudo[148203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhyjgihbqlbkqqspyjvoijnrdykjness ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063637.559422-731-135167438787541/AnsiballZ_command.py'
Nov 25 09:40:37 compute-0 sudo[148203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094037 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:40:37 compute-0 python3.9[148205]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:37 compute-0 sudo[148203]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:38 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:38 compute-0 sudo[148357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqphbkjuepmrtiqauzmpkebdjwixppuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063638.0760908-755-233070420451715/AnsiballZ_stat.py'
Nov 25 09:40:38 compute-0 sudo[148357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:38 compute-0 ceph-mon[74207]: pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:40:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:38 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840081f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:38 compute-0 python3.9[148359]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:40:38 compute-0 sudo[148357]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:38.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:39 compute-0 sudo[148511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrdfefjqfconpzgubjalgcffxvohphiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063638.5965135-779-159073027560792/AnsiballZ_command.py'
Nov 25 09:40:39 compute-0 sudo[148511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:39 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0980031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:39 compute-0 python3.9[148513]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:40:39 compute-0 sudo[148511]: pam_unix(sudo:session): session closed for user root
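This is the apply counterpart of the 09:40:36 check: the flush, rule, and update-jump fragments are piped into nft without -c, and the task only ran because the stat at 09:40:38 found the edpm-rules.nft.changed marker (touched at 09:40:36, removed again just below once the rules are live). Note that edpm-chains.nft was loaded separately at 09:40:37, so the chains already exist when the flush/rules pass references them. In sketch form, assuming the same fragment files:

    nft -f /etc/nftables/edpm-chains.nft                # create/refresh chains first
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -  # then flush and reload the rules for real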
Nov 25 09:40:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:39.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:39 compute-0 sudo[148666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gahleluitnjshrwyonhgoqkvcmyghjju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063639.3377905-803-95154814695454/AnsiballZ_file.py'
Nov 25 09:40:39 compute-0 sudo[148666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:39 compute-0 sudo[148667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:39 compute-0 sudo[148667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:39 compute-0 sudo[148667]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:39 compute-0 python3.9[148674]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:39 compute-0 sudo[148666]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:40 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0980031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:40] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:40:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:40] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:40:40 compute-0 ceph-mon[74207]: pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:40:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:40 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0980031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:40 compute-0 python3.9[148845]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:40:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:40.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:41 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:41.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:41 compute-0 sudo[148996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvyaandlwoswfydczytvogcncexckirk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063641.3395371-923-179788347475263/AnsiballZ_command.py'
Nov 25 09:40:41 compute-0 sudo[148996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:41 compute-0 python3.9[148998]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:41 compute-0 ovs-vsctl[149000]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 09:40:41 compute-0 sudo[148996]: pam_unix(sudo:session): session closed for user root
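The ovs-vsctl call above seeds the OVN controller settings in the external_ids column of the Open_vSwitch table. Each key can be read back the same way it was set; for example, with key names and expected values taken from the logged command:

    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip    # "172.19.0.102" per the command above
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote      # "ssl:ovsdbserver-sb.openstack.svc:6642"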
Nov 25 09:40:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:42 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:42 compute-0 sudo[149151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foafggsxgdeqogqyhopeoawmzvbolosr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063642.1053872-950-230095649792823/AnsiballZ_command.py'
Nov 25 09:40:42 compute-0 sudo[149151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:42 compute-0 ceph-mon[74207]: pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:40:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:42 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840086f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:42 compute-0 python3.9[149153]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:42 compute-0 sudo[149151]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:42 compute-0 sudo[149306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnfddrjqwxkotwfvsojruxirgpocquef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063642.591895-974-209872340532993/AnsiballZ_command.py'
Nov 25 09:40:42 compute-0 sudo[149306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:40:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:42.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:40:42 compute-0 python3.9[149308]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:42 compute-0 ovs-vsctl[149309]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 25 09:40:42 compute-0 sudo[149306]: pam_unix(sudo:session): session closed for user root
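The earlier 'ovs-vsctl show | grep -q "Manager"' task explains the guard: a Manager record is only created when none is present. The ******** in the sudo line is redaction applied to the logged command; the unredacted invocation is visible in the ovs-vsctl audit line directly above (a passive TCP listener on 127.0.0.1:6640). The result can be confirmed with:

    ovs-vsctl show | grep Manager                  # should list: Manager "ptcp:6640:127.0.0.1"
    ovs-vsctl get Open_vSwitch . manager_options   # same record, read from the manager_options column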
Nov 25 09:40:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:43 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840086f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:40:43 compute-0 python3.9[149459]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:40:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:43.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:43 compute-0 sudo[149612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztgctjtkrotyqxijuombwtiujdwtmqkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063643.677783-1025-115094621034004/AnsiballZ_file.py'
Nov 25 09:40:43 compute-0 sudo[149612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:44 compute-0 python3.9[149614]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:44 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:44 compute-0 sudo[149612]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:44 compute-0 ceph-mon[74207]: pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:40:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:44 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:44 compute-0 sudo[149765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwrnvqwmfeuwhwwkxizrhjhylxcfdvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063644.205758-1049-192440147919981/AnsiballZ_stat.py'
Nov 25 09:40:44 compute-0 sudo[149765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:44 compute-0 python3.9[149767]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:44 compute-0 sudo[149765]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:44 compute-0 sudo[149843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmbtgbixqzknnmbhtfzlpdrzkodrldzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063644.205758-1049-192440147919981/AnsiballZ_file.py'
Nov 25 09:40:44 compute-0 sudo[149843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:44.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:44 compute-0 python3.9[149845]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:44 compute-0 sudo[149843]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:40:44
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['images', '.rgw.root', 'vms', 'cephfs.cephfs.meta', '.nfs', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:40:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:40:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:40:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:40:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:45 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:40:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:40:45 compute-0 sudo[149995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uadskrpnyzbfssobdqrgbqnnmacxwymi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063645.1727614-1049-225911800334702/AnsiballZ_stat.py'
Nov 25 09:40:45 compute-0 sudo[149995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:45.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:45 compute-0 python3.9[149997]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:45 compute-0 sudo[149995]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:45 compute-0 sudo[150074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnxztpsygvxnqzsrobpzkvhpupvvlxvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063645.1727614-1049-225911800334702/AnsiballZ_file.py'
Nov 25 09:40:45 compute-0 sudo[150074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:45 compute-0 python3.9[150076]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:45 compute-0 sudo[150074]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:46 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:46 compute-0 sudo[150227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miwrhdfledpoowfrsjrwffmnsovspkpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063645.98832-1118-11911578691992/AnsiballZ_file.py'
Nov 25 09:40:46 compute-0 sudo[150227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:46 compute-0 python3.9[150229]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:46 compute-0 sudo[150227]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:46 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:46 compute-0 ceph-mon[74207]: pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:40:46 compute-0 sudo[150379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htdwrnkasxblbaltrgppybxwquitxuje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063646.4829142-1142-78403694041122/AnsiballZ_stat.py'
Nov 25 09:40:46 compute-0 sudo[150379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:46.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:46 compute-0 python3.9[150381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:46 compute-0 sudo[150379]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:46.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:46.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:46.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:46.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:47 compute-0 sudo[150457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogtpmzpyhkfizeurutmsdgautejffqia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063646.4829142-1142-78403694041122/AnsiballZ_file.py'
Nov 25 09:40:47 compute-0 sudo[150457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:47 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840086f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:40:47 compute-0 python3.9[150459]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:47 compute-0 sudo[150457]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:47 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:40:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:47.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:47 compute-0 sudo[150609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lphigkhlcgtymneckxrzabsbnvbtsvqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063647.3811383-1178-52188916935844/AnsiballZ_stat.py'
Nov 25 09:40:47 compute-0 sudo[150609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:47 compute-0 python3.9[150611]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:47 compute-0 sudo[150609]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.747529) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647747548, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2235, "num_deletes": 251, "total_data_size": 4260902, "memory_usage": 4317352, "flush_reason": "Manual Compaction"}
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647753271, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2610477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10620, "largest_seqno": 12854, "table_properties": {"data_size": 2602695, "index_size": 4276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20252, "raw_average_key_size": 21, "raw_value_size": 2585438, "raw_average_value_size": 2698, "num_data_blocks": 187, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063447, "oldest_key_time": 1764063447, "file_creation_time": 1764063647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 5769 microseconds, and 4198 cpu microseconds.
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.753295) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2610477 bytes OK
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.753311) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.754538) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.754549) EVENT_LOG_v1 {"time_micros": 1764063647754546, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.754557) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4251594, prev total WAL file size 4251594, number of live WAL files 2.
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.755672) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2549KB)], [26(13MB)]
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647755692, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16471699, "oldest_snapshot_seqno": -1}
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4451 keys, 14453609 bytes, temperature: kUnknown
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647788878, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14453609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14419208, "index_size": 22195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 111713, "raw_average_key_size": 25, "raw_value_size": 14333427, "raw_average_value_size": 3220, "num_data_blocks": 957, "num_entries": 4451, "num_filter_entries": 4451, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764063647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.789102) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14453609 bytes
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.789509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 495.1 rd, 434.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 13.2 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(11.8) write-amplify(5.5) OK, records in: 4877, records dropped: 426 output_compression: NoCompression
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.789533) EVENT_LOG_v1 {"time_micros": 1764063647789516, "job": 10, "event": "compaction_finished", "compaction_time_micros": 33269, "compaction_time_cpu_micros": 21708, "output_level": 6, "num_output_files": 1, "total_output_size": 14453609, "num_input_records": 4877, "num_output_records": 4451, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647789990, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647792016, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.755634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.792038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.792041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.792042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.792043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:47 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:47.792044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
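Annotation: the ceph-mon RocksDB lines above (flush job 9, manual compaction job 10) embed machine-readable JSON after the `EVENT_LOG_v1 ` marker, including `flush_finished` and `compaction_finished` events with timing fields. A minimal sketch for extracting those events from a saved copy of this journal and tallying compaction time; the log file path and the assumption that each event fits on one line come from this excerpt, not from any Ceph tooling:

```python
import json
import re
import sys
from collections import Counter

# Matches the JSON payload that follows "EVENT_LOG_v1 " in the
# ceph-mon/rocksdb journal lines above (works whether or not the line
# carries an "(Original Log Time ...)" prefix).
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

def iter_events(path):
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

if __name__ == "__main__":
    counts = Counter()
    compaction_us = 0
    for ev in iter_events(sys.argv[1]):  # e.g. a saved messages file
        counts[ev.get("event", "?")] += 1
        compaction_us += ev.get("compaction_time_micros", 0)
    print(dict(counts))
    print(f"total compaction time: {compaction_us} us")
```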
Nov 25 09:40:47 compute-0 sudo[150688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xduoqpodxuemqpdrumzvozjourzgcqbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063647.3811383-1178-52188916935844/AnsiballZ_file.py'
Nov 25 09:40:47 compute-0 sudo[150688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:48 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:48 compute-0 python3.9[150691]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:48 compute-0 sudo[150688]: pam_unix(sudo:session): session closed for user root
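Annotation: each `sudo ... /bin/sh -c 'echo BECOME-SUCCESS-<token> ; python3.9 .../AnsiballZ_*.py'` pair in this log is Ansible's privilege-escalation wrapper: the random `BECOME-SUCCESS` token is echoed first so the controller can locate where real module output begins. A rough sketch of that separation step, not Ansible's actual implementation:

```python
# Roughly what the controller does with captured output: discard
# everything up to and including the BECOME-SUCCESS marker (the random
# token echoed by the /bin/sh wrapper logged above).
def strip_become_banner(output: str, marker: str) -> str:
    idx = output.find(marker)
    if idx == -1:
        return output  # no become wrapper present
    return output[idx + len(marker):].lstrip("\r\n")

token = "BECOME-SUCCESS-xduoqpodxuemqpdrumzvozjourzgcqbz"
print(strip_become_banner(token + '\n{"changed": false}', token))
```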
Nov 25 09:40:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:48 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:48 compute-0 ceph-mon[74207]: pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:40:48 compute-0 sudo[150841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwpjnhpatfpoxpxcbthusckevzdhnvwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063648.3792942-1214-268694686037485/AnsiballZ_systemd.py'
Nov 25 09:40:48 compute-0 sudo[150841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:48 compute-0 python3.9[150843]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:40:48 compute-0 systemd[1]: Reloading.
Nov 25 09:40:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:48.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
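Annotation: the recurring radosgw trios (`starting new request` / `req done` / `beast: ...`) are load-balancer health probes (anonymous `HEAD /` returning 200). The `beast:` access line is the one worth parsing; the field layout below is inferred from this excerpt and is an assumption, not a documented stable format:

```python
import re

# Field layout inferred from the beast access-log lines in this journal.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) (?P<bytes>\d+).*'
    r'latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous '
        '[25/Nov/2025:09:40:48.870 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
m = BEAST_RE.search(line)
print(m.group("ip"), m.group("request"), m.group("status"), m.group("latency"))
```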
Nov 25 09:40:48 compute-0 systemd-sysv-generator[150872]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:40:48 compute-0 systemd-rc-local-generator[150868]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:40:49 compute-0 sudo[150841]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:49 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:40:49 compute-0 sudo[151030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsokgifnddwvlmqeapqiblttqslxdopk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063649.213049-1238-200517936183837/AnsiballZ_stat.py'
Nov 25 09:40:49 compute-0 sudo[151030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:49.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:49 compute-0 python3.9[151032]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:49 compute-0 sudo[151030]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:49 compute-0 sudo[151109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfloigstuyjkcprvksmepcaztlwwyljv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063649.213049-1238-200517936183837/AnsiballZ_file.py'
Nov 25 09:40:49 compute-0 sudo[151109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:49 compute-0 python3.9[151111]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:49 compute-0 sudo[151109]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:50 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084007dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:50] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:40:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:40:50] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:40:50 compute-0 sudo[151262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwulfmglbobsbckfkqpqssonlggkoywc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063650.0600698-1274-120265457970001/AnsiballZ_stat.py'
Nov 25 09:40:50 compute-0 sudo[151262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:50 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:40:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:50 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:40:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:50 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:50 compute-0 ceph-mon[74207]: pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:40:50 compute-0 python3.9[151264]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:50 compute-0 sudo[151262]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:50 compute-0 sudo[151340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbtfateetzvqnpdtxrrbvcoijjtelfjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063650.0600698-1274-120265457970001/AnsiballZ_file.py'
Nov 25 09:40:50 compute-0 sudo[151340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:50 compute-0 python3.9[151342]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:50 compute-0 sudo[151340]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:50.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:51 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:40:51 compute-0 sudo[151492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnqlocudcqaolbxkpfoxdkbgxggxajio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063650.9293253-1310-251328031259459/AnsiballZ_systemd.py'
Nov 25 09:40:51 compute-0 sudo[151492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.374405) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651374437, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 284, "num_deletes": 251, "total_data_size": 80912, "memory_usage": 86760, "flush_reason": "Manual Compaction"}
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651375304, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 80758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12855, "largest_seqno": 13138, "table_properties": {"data_size": 78848, "index_size": 138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4668, "raw_average_key_size": 17, "raw_value_size": 75164, "raw_average_value_size": 279, "num_data_blocks": 6, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063648, "oldest_key_time": 1764063648, "file_creation_time": 1764063651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 927 microseconds, and 582 cpu microseconds.
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.375336) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 80758 bytes OK
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.375349) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.375679) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.375690) EVENT_LOG_v1 {"time_micros": 1764063651375687, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.375703) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 78793, prev total WAL file size 78793, number of live WAL files 2.
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(78KB)], [29(13MB)]
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651377199, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 14534367, "oldest_snapshot_seqno": -1}
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4211 keys, 11708672 bytes, temperature: kUnknown
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651404786, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11708672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11677357, "index_size": 19685, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 107664, "raw_average_key_size": 25, "raw_value_size": 11597232, "raw_average_value_size": 2754, "num_data_blocks": 840, "num_entries": 4211, "num_filter_entries": 4211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764063651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.405089) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11708672 bytes
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.405541) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 524.5 rd, 422.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.8 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(325.0) write-amplify(145.0) OK, records in: 4720, records dropped: 509 output_compression: NoCompression
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.405560) EVENT_LOG_v1 {"time_micros": 1764063651405549, "job": 12, "event": "compaction_finished", "compaction_time_micros": 27712, "compaction_time_cpu_micros": 17191, "output_level": 6, "num_output_files": 1, "total_output_size": 11708672, "num_input_records": 4720, "num_output_records": 4211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651405945, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651407698, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.407779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.407782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.407784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.407785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:40:51 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:40:51.407786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
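Annotation: the large amplification figures in job 12's summary (`read-write-amplify(325.0) write-amplify(145.0)`) reproduce exactly from the byte counts in the surrounding events: the L0 input is the 80758-byte flush table #31, total compaction input is 14534367 bytes (`input_data_size`), and output is 11708672 bytes. The ratios are high simply because a tiny flush was merged against a ~13 MB L6 file:

```python
l0_in    = 80758      # table #31 file_size (flush job 11)
total_in = 14534367   # input_data_size from compaction_started (job 12)
out      = 11708672   # total_output_size from compaction_finished (job 12)

print(round(out / l0_in, 1))                # 145.0 -> write-amplify
print(round((total_in + out) / l0_in, 1))   # 325.0 -> read-write-amplify
```

The same arithmetic matches job 10 above (14453609 / 2610477 ≈ 5.5, (16471699 + 14453609) / 2610477 ≈ 11.8).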
Nov 25 09:40:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:51.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:51 compute-0 python3.9[151494]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:40:51 compute-0 systemd[1]: Reloading.
Nov 25 09:40:51 compute-0 systemd-sysv-generator[151518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:40:51 compute-0 systemd-rc-local-generator[151515]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:40:51 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 09:40:51 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:40:51 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:40:51 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 09:40:51 compute-0 sudo[151492]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:52 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:52 compute-0 sudo[151686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvaudbnmrdnweesjmxijundjmnzxicda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063652.1071699-1340-245387836301436/AnsiballZ_file.py'
Nov 25 09:40:52 compute-0 sudo[151686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:52 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084007dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:52 compute-0 ceph-mon[74207]: pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:40:52 compute-0 python3.9[151688]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:52 compute-0 sudo[151689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:40:52 compute-0 sudo[151686]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-0 sudo[151689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:52 compute-0 sudo[151689]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-0 sudo[151714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:40:52 compute-0 sudo[151714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:52 compute-0 sudo[151903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xftpzrdfmwybqnkvqxowjnsujscbmoqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063652.597458-1364-200993441685595/AnsiballZ_stat.py'
Nov 25 09:40:52 compute-0 sudo[151903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:52 compute-0 sudo[151714]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-0 python3.9[151905]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:52 compute-0 sudo[151903]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:40:52 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:40:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:40:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:40:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:40:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:40:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:40:52 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:40:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:40:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:40:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:40:52 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
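Annotation: the `handle_command` / audit pairs above are the cephadm mgr module refreshing host state; the truncated `entity='mgr.compute-0.zcfgby'` audit lines with no `cmd=` are the monitor deliberately eliding payloads (the config-key values contain secrets), not log corruption. The same `config generate-minimal-conf` command can be issued by hand; a small sketch, assuming a working admin keyring on the host:

```python
import subprocess

# Issue the same mon command the mgr dispatches above; it prints a
# minimal ceph.conf (fsid + mon hosts). Requires client.admin access.
conf = subprocess.run(
    ["ceph", "config", "generate-minimal-conf"],
    capture_output=True, text=True, check=True,
).stdout
print(conf)
```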
Nov 25 09:40:53 compute-0 sudo[151932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:40:53 compute-0 sudo[151932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:53 compute-0 sudo[151932]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:53 compute-0 sudo[151980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:40:53 compute-0 sudo[151980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:53 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084007dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:53 compute-0 sudo[152090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgtzvkknjlphpfzcbtmniumwyedlkqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063652.597458-1364-200993441685595/AnsiballZ_copy.py'
Nov 25 09:40:53 compute-0 sudo[152090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:40:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:53 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:40:53 compute-0 python3.9[152092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063652.597458-1364-200993441685595/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:53 compute-0 podman[152124]: 2025-11-25 09:40:53.368735402 +0000 UTC m=+0.028261686 container create fef1229e964aebc6796bfe1e723f23205792f59b1520c2999fe6af4443de9d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_franklin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:40:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:40:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:40:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:40:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:40:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:40:53 compute-0 sudo[152090]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:53 compute-0 systemd[1]: Started libpod-conmon-fef1229e964aebc6796bfe1e723f23205792f59b1520c2999fe6af4443de9d64.scope.
Nov 25 09:40:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:53 compute-0 podman[152124]: 2025-11-25 09:40:53.425485118 +0000 UTC m=+0.085011393 container init fef1229e964aebc6796bfe1e723f23205792f59b1520c2999fe6af4443de9d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_franklin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:40:53 compute-0 podman[152124]: 2025-11-25 09:40:53.43075517 +0000 UTC m=+0.090281444 container start fef1229e964aebc6796bfe1e723f23205792f59b1520c2999fe6af4443de9d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:40:53 compute-0 podman[152124]: 2025-11-25 09:40:53.432408627 +0000 UTC m=+0.091934901 container attach fef1229e964aebc6796bfe1e723f23205792f59b1520c2999fe6af4443de9d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:40:53 compute-0 competent_franklin[152137]: 167 167
Nov 25 09:40:53 compute-0 systemd[1]: libpod-fef1229e964aebc6796bfe1e723f23205792f59b1520c2999fe6af4443de9d64.scope: Deactivated successfully.
Nov 25 09:40:53 compute-0 podman[152124]: 2025-11-25 09:40:53.43455334 +0000 UTC m=+0.094079613 container died fef1229e964aebc6796bfe1e723f23205792f59b1520c2999fe6af4443de9d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_franklin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:40:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef23909484e2ef9736cce422b385ebbf20b04a5da9a1eb29dd0b32e7a23ef43a-merged.mount: Deactivated successfully.
Nov 25 09:40:53 compute-0 podman[152124]: 2025-11-25 09:40:53.35582486 +0000 UTC m=+0.015351144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:40:53 compute-0 podman[152124]: 2025-11-25 09:40:53.455129289 +0000 UTC m=+0.114655563 container remove fef1229e964aebc6796bfe1e723f23205792f59b1520c2999fe6af4443de9d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:40:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:53.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:53 compute-0 systemd[1]: libpod-conmon-fef1229e964aebc6796bfe1e723f23205792f59b1520c2999fe6af4443de9d64.scope: Deactivated successfully.
Nov 25 09:40:53 compute-0 podman[152182]: 2025-11-25 09:40:53.567424871 +0000 UTC m=+0.029824781 container create 560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:53 compute-0 systemd[1]: Started libpod-conmon-560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6.scope.
Nov 25 09:40:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f6613318bfd024f95bc33bb2d20d879b56c1c8d069a8321b215af688eec993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f6613318bfd024f95bc33bb2d20d879b56c1c8d069a8321b215af688eec993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f6613318bfd024f95bc33bb2d20d879b56c1c8d069a8321b215af688eec993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f6613318bfd024f95bc33bb2d20d879b56c1c8d069a8321b215af688eec993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f6613318bfd024f95bc33bb2d20d879b56c1c8d069a8321b215af688eec993/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
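Annotation: the `supports timestamps until 2038 (0x7fffffff)` kernel lines mean these XFS mounts lack the bigtime feature, so inode timestamps are 32-bit signed seconds. 0x7fffffff = 2^31 − 1 seconds after the Unix epoch, i.e. 2038-01-19T03:14:07Z:

```python
from datetime import datetime, timezone

# The limit the kernel reports for these (non-bigtime) XFS mounts.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```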
Nov 25 09:40:53 compute-0 podman[152182]: 2025-11-25 09:40:53.616049445 +0000 UTC m=+0.078449374 container init 560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:40:53 compute-0 podman[152182]: 2025-11-25 09:40:53.621754117 +0000 UTC m=+0.084154025 container start 560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:40:53 compute-0 podman[152182]: 2025-11-25 09:40:53.623154005 +0000 UTC m=+0.085553914 container attach 560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:40:53 compute-0 podman[152182]: 2025-11-25 09:40:53.555251889 +0000 UTC m=+0.017651829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:40:53 compute-0 epic_banzai[152195]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:40:53 compute-0 epic_banzai[152195]: --> All data devices are unavailable
Nov 25 09:40:53 compute-0 systemd[1]: libpod-560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6.scope: Deactivated successfully.
Nov 25 09:40:53 compute-0 conmon[152195]: conmon 560340970d44bf356aaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6.scope/container/memory.events
Nov 25 09:40:53 compute-0 podman[152182]: 2025-11-25 09:40:53.893233253 +0000 UTC m=+0.355633161 container died 560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_banzai, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:40:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-25f6613318bfd024f95bc33bb2d20d879b56c1c8d069a8321b215af688eec993-merged.mount: Deactivated successfully.
Nov 25 09:40:53 compute-0 podman[152182]: 2025-11-25 09:40:53.920413681 +0000 UTC m=+0.382813589 container remove 560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:53 compute-0 systemd[1]: libpod-conmon-560340970d44bf356aaf5a364a7174dfdb63d60e6f833433145830c364bf42e6.scope: Deactivated successfully.
Nov 25 09:40:53 compute-0 sudo[151980]: pam_unix(sudo:session): session closed for user root
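Annotation: the throwaway podman containers (competent_franklin, epic_banzai) are cephadm running `ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0` for the `default_drive_group` spec. epic_banzai's `passed data devices: 0 physical, 1 LVM` followed by `All data devices are unavailable` is the usual idempotency outcome when the LV is already consumed by an existing OSD; cephadm then runs `ceph-volume lvm list --format json` (see the sudo line below) to reconcile. A sketch of that follow-up check, not cephadm's code; it assumes root and a local ceph-volume, and that the JSON is keyed by OSD id as in current releases:

```python
import json
import subprocess

# Ask ceph-volume which LVs already belong to OSDs on this host.
raw = subprocess.run(
    ["ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for osd_id, devices in json.loads(raw).items():
    for dev in devices:
        print(osd_id, dev.get("lv_path"))
```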
Nov 25 09:40:53 compute-0 sudo[152349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtwjdcdpgniszvgaihwtgmhulmaardaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063653.7575638-1415-200457878237816/AnsiballZ_file.py'
Nov 25 09:40:53 compute-0 sudo[152349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:53 compute-0 sudo[152350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:40:53 compute-0 sudo[152350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:53 compute-0 sudo[152350]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:54 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc084007dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:54 compute-0 sudo[152377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:40:54 compute-0 sudo[152377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:54 compute-0 python3.9[152358]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:40:54 compute-0 sudo[152349]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-0 podman[152462]: 2025-11-25 09:40:54.317653627 +0000 UTC m=+0.026088830 container create f53a032823529d4f2371db6f1e3f9f4704bd5038a6f4ca2ff180d8ffa5a614cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:40:54 compute-0 systemd[1]: Started libpod-conmon-f53a032823529d4f2371db6f1e3f9f4704bd5038a6f4ca2ff180d8ffa5a614cd.scope.
Nov 25 09:40:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:54 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:54 compute-0 podman[152462]: 2025-11-25 09:40:54.363908234 +0000 UTC m=+0.072343456 container init f53a032823529d4f2371db6f1e3f9f4704bd5038a6f4ca2ff180d8ffa5a614cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:40:54 compute-0 podman[152462]: 2025-11-25 09:40:54.368422501 +0000 UTC m=+0.076857704 container start f53a032823529d4f2371db6f1e3f9f4704bd5038a6f4ca2ff180d8ffa5a614cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_kilby, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:40:54 compute-0 podman[152462]: 2025-11-25 09:40:54.370011265 +0000 UTC m=+0.078446468 container attach f53a032823529d4f2371db6f1e3f9f4704bd5038a6f4ca2ff180d8ffa5a614cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:40:54 compute-0 stoic_kilby[152512]: 167 167
Nov 25 09:40:54 compute-0 systemd[1]: libpod-f53a032823529d4f2371db6f1e3f9f4704bd5038a6f4ca2ff180d8ffa5a614cd.scope: Deactivated successfully.
Nov 25 09:40:54 compute-0 podman[152462]: 2025-11-25 09:40:54.372418683 +0000 UTC m=+0.080853936 container died f53a032823529d4f2371db6f1e3f9f4704bd5038a6f4ca2ff180d8ffa5a614cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_kilby, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:40:54 compute-0 ceph-mon[74207]: pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:40:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-623cc62b5ffa6239d663a3c221b829e74d8893bcc414232d491356a2925dbeaa-merged.mount: Deactivated successfully.
Nov 25 09:40:54 compute-0 podman[152462]: 2025-11-25 09:40:54.392592545 +0000 UTC m=+0.101027748 container remove f53a032823529d4f2371db6f1e3f9f4704bd5038a6f4ca2ff180d8ffa5a614cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:40:54 compute-0 podman[152462]: 2025-11-25 09:40:54.306732444 +0000 UTC m=+0.015167668 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:40:54 compute-0 systemd[1]: libpod-conmon-f53a032823529d4f2371db6f1e3f9f4704bd5038a6f4ca2ff180d8ffa5a614cd.scope: Deactivated successfully.
Nov 25 09:40:54 compute-0 podman[152592]: 2025-11-25 09:40:54.510202474 +0000 UTC m=+0.027029663 container create c3e813885e5f9c5501e811afcdedb737caf863833dc8216ba78cd09e3946d294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:54 compute-0 systemd[1]: Started libpod-conmon-c3e813885e5f9c5501e811afcdedb737caf863833dc8216ba78cd09e3946d294.scope.
Nov 25 09:40:54 compute-0 sudo[152629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vygyvfxyinvpbboezhwrqlniwgrahnkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063654.312917-1439-231760301252196/AnsiballZ_stat.py'
Nov 25 09:40:54 compute-0 sudo[152629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b3a53d89348646ed5aac55ff130fc06bef4d699e029e2a4d20c13775a3c1ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b3a53d89348646ed5aac55ff130fc06bef4d699e029e2a4d20c13775a3c1ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b3a53d89348646ed5aac55ff130fc06bef4d699e029e2a4d20c13775a3c1ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b3a53d89348646ed5aac55ff130fc06bef4d699e029e2a4d20c13775a3c1ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:54 compute-0 podman[152592]: 2025-11-25 09:40:54.563210568 +0000 UTC m=+0.080037777 container init c3e813885e5f9c5501e811afcdedb737caf863833dc8216ba78cd09e3946d294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:40:54 compute-0 podman[152592]: 2025-11-25 09:40:54.571411896 +0000 UTC m=+0.088239084 container start c3e813885e5f9c5501e811afcdedb737caf863833dc8216ba78cd09e3946d294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:40:54 compute-0 podman[152592]: 2025-11-25 09:40:54.573026247 +0000 UTC m=+0.089853436 container attach c3e813885e5f9c5501e811afcdedb737caf863833dc8216ba78cd09e3946d294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:40:54 compute-0 podman[152592]: 2025-11-25 09:40:54.498655772 +0000 UTC m=+0.015482981 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:40:54 compute-0 python3.9[152635]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:40:54 compute-0 sudo[152629]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-0 awesome_joliot[152633]: {
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:     "1": [
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:         {
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "devices": [
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "/dev/loop3"
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             ],
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "lv_name": "ceph_lv0",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "lv_size": "21470642176",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "name": "ceph_lv0",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "tags": {
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.cluster_name": "ceph",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.crush_device_class": "",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.encrypted": "0",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.osd_id": "1",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.type": "block",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.vdo": "0",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:                 "ceph.with_tpm": "0"
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             },
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "type": "block",
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:             "vg_name": "ceph_vg0"
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:         }
Nov 25 09:40:54 compute-0 awesome_joliot[152633]:     ]
Nov 25 09:40:54 compute-0 awesome_joliot[152633]: }
Nov 25 09:40:54 compute-0 systemd[1]: libpod-c3e813885e5f9c5501e811afcdedb737caf863833dc8216ba78cd09e3946d294.scope: Deactivated successfully.
Nov 25 09:40:54 compute-0 podman[152592]: 2025-11-25 09:40:54.802863873 +0000 UTC m=+0.319691061 container died c3e813885e5f9c5501e811afcdedb737caf863833dc8216ba78cd09e3946d294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:40:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2b3a53d89348646ed5aac55ff130fc06bef4d699e029e2a4d20c13775a3c1ae-merged.mount: Deactivated successfully.
Nov 25 09:40:54 compute-0 podman[152592]: 2025-11-25 09:40:54.824562157 +0000 UTC m=+0.341389346 container remove c3e813885e5f9c5501e811afcdedb737caf863833dc8216ba78cd09e3946d294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:40:54 compute-0 systemd[1]: libpod-conmon-c3e813885e5f9c5501e811afcdedb737caf863833dc8216ba78cd09e3946d294.scope: Deactivated successfully.
Nov 25 09:40:54 compute-0 sudo[152377]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:54 compute-0 sudo[152723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:40:54 compute-0 sudo[152723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:54 compute-0 sudo[152723]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:54 compute-0 sudo[152773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:40:54 compute-0 sudo[152773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:54 compute-0 sudo[152821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xasczhbyzgyimnxkhgqoimazjabfpmlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063654.312917-1439-231760301252196/AnsiballZ_copy.py'
Nov 25 09:40:54 compute-0 sudo[152821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:55 compute-0 python3.9[152825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063654.312917-1439-231760301252196/.source.json _original_basename=.e64gqu1y follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:55 compute-0 sudo[152821]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:55 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:40:55 compute-0 podman[152882]: 2025-11-25 09:40:55.246464957 +0000 UTC m=+0.032717774 container create 5b8db7c5e99ad068f71a613254857251ce3298b02e762db3ffcf429e7ae0351d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldstine, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:40:55 compute-0 systemd[1]: Started libpod-conmon-5b8db7c5e99ad068f71a613254857251ce3298b02e762db3ffcf429e7ae0351d.scope.
Nov 25 09:40:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:55 compute-0 podman[152882]: 2025-11-25 09:40:55.295047591 +0000 UTC m=+0.081300408 container init 5b8db7c5e99ad068f71a613254857251ce3298b02e762db3ffcf429e7ae0351d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:40:55 compute-0 podman[152882]: 2025-11-25 09:40:55.299947666 +0000 UTC m=+0.086200483 container start 5b8db7c5e99ad068f71a613254857251ce3298b02e762db3ffcf429e7ae0351d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:40:55 compute-0 unruffled_goldstine[152926]: 167 167
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 podman[152882]: 2025-11-25 09:40:55.303352974 +0000 UTC m=+0.089605812 container attach 5b8db7c5e99ad068f71a613254857251ce3298b02e762db3ffcf429e7ae0351d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:40:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:40:55 compute-0 systemd[1]: libpod-5b8db7c5e99ad068f71a613254857251ce3298b02e762db3ffcf429e7ae0351d.scope: Deactivated successfully.
Nov 25 09:40:55 compute-0 podman[152882]: 2025-11-25 09:40:55.30433797 +0000 UTC m=+0.090590788 container died 5b8db7c5e99ad068f71a613254857251ce3298b02e762db3ffcf429e7ae0351d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldstine, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:40:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-295f15e775376d863ea090b05fb20905d1dd790add58a18a02db979cc606069e-merged.mount: Deactivated successfully.
Nov 25 09:40:55 compute-0 podman[152882]: 2025-11-25 09:40:55.321106553 +0000 UTC m=+0.107359371 container remove 5b8db7c5e99ad068f71a613254857251ce3298b02e762db3ffcf429e7ae0351d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:40:55 compute-0 podman[152882]: 2025-11-25 09:40:55.233123121 +0000 UTC m=+0.019375958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:40:55 compute-0 systemd[1]: libpod-conmon-5b8db7c5e99ad068f71a613254857251ce3298b02e762db3ffcf429e7ae0351d.scope: Deactivated successfully.
Nov 25 09:40:55 compute-0 podman[153015]: 2025-11-25 09:40:55.437673187 +0000 UTC m=+0.029315642 container create 67f5f290cd241a93f72b9ecb5b5fae9d729958118300172c99a5ec6d3ddfa1e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:40:55 compute-0 systemd[1]: Started libpod-conmon-67f5f290cd241a93f72b9ecb5b5fae9d729958118300172c99a5ec6d3ddfa1e1.scope.
Nov 25 09:40:55 compute-0 sudo[153051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfvuusjtvhyjogpjkzffwvehvmwidchs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063655.2531323-1484-7724675342744/AnsiballZ_file.py'
Nov 25 09:40:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:55.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:55 compute-0 sudo[153051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28b9b968cc8a0787b40ea7f87fed6594d1d3cd540cacb3de8f1f297772b1d86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28b9b968cc8a0787b40ea7f87fed6594d1d3cd540cacb3de8f1f297772b1d86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28b9b968cc8a0787b40ea7f87fed6594d1d3cd540cacb3de8f1f297772b1d86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28b9b968cc8a0787b40ea7f87fed6594d1d3cd540cacb3de8f1f297772b1d86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:40:55 compute-0 podman[153015]: 2025-11-25 09:40:55.503556207 +0000 UTC m=+0.095198672 container init 67f5f290cd241a93f72b9ecb5b5fae9d729958118300172c99a5ec6d3ddfa1e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:55 compute-0 podman[153015]: 2025-11-25 09:40:55.508160625 +0000 UTC m=+0.099803080 container start 67f5f290cd241a93f72b9ecb5b5fae9d729958118300172c99a5ec6d3ddfa1e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:55 compute-0 podman[153015]: 2025-11-25 09:40:55.509925501 +0000 UTC m=+0.101567956 container attach 67f5f290cd241a93f72b9ecb5b5fae9d729958118300172c99a5ec6d3ddfa1e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:40:55 compute-0 podman[153015]: 2025-11-25 09:40:55.424748799 +0000 UTC m=+0.016391273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:40:55 compute-0 python3.9[153059]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:55 compute-0 sudo[153051]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:55 compute-0 vigilant_hofstadter[153056]: {}
Nov 25 09:40:55 compute-0 lvm[153235]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:40:55 compute-0 lvm[153235]: VG ceph_vg0 finished
Nov 25 09:40:55 compute-0 systemd[1]: libpod-67f5f290cd241a93f72b9ecb5b5fae9d729958118300172c99a5ec6d3ddfa1e1.scope: Deactivated successfully.
Nov 25 09:40:55 compute-0 podman[153015]: 2025-11-25 09:40:55.960287628 +0000 UTC m=+0.551930083 container died 67f5f290cd241a93f72b9ecb5b5fae9d729958118300172c99a5ec6d3ddfa1e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:40:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b28b9b968cc8a0787b40ea7f87fed6594d1d3cd540cacb3de8f1f297772b1d86-merged.mount: Deactivated successfully.
Nov 25 09:40:55 compute-0 podman[153015]: 2025-11-25 09:40:55.985510516 +0000 UTC m=+0.577152971 container remove 67f5f290cd241a93f72b9ecb5b5fae9d729958118300172c99a5ec6d3ddfa1e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:40:55 compute-0 systemd[1]: libpod-conmon-67f5f290cd241a93f72b9ecb5b5fae9d729958118300172c99a5ec6d3ddfa1e1.scope: Deactivated successfully.
Nov 25 09:40:56 compute-0 sudo[152773]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:40:56 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:40:56 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:56 compute-0 sudo[153295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvfjptmifaqtwghwfolgrxdxpcfgyvbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063655.80691-1508-19012104590035/AnsiballZ_stat.py'
Nov 25 09:40:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:56 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:56 compute-0 sudo[153295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:56 compute-0 sudo[153297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:40:56 compute-0 sudo[153297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:56 compute-0 sudo[153297]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:56 compute-0 sudo[153295]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:56 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840086f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:56 compute-0 ceph-mon[74207]: pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:40:56 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:56 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:40:56 compute-0 sudo[153443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpcvavvqhlunyxotuofnbrjvpsfqnkwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063655.80691-1508-19012104590035/AnsiballZ_copy.py'
Nov 25 09:40:56 compute-0 sudo[153443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:56 compute-0 sudo[153443]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:40:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:56.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:40:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:56.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:56.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:56.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:40:56.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:40:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:57 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:57 compute-0 sudo[153595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgdxlbncqhtzbwmglwwnenqrghykbrji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063656.8987958-1559-15900573094939/AnsiballZ_container_config_data.py'
Nov 25 09:40:57 compute-0 sudo[153595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:40:57 compute-0 python3.9[153597]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 09:40:57 compute-0 sudo[153595]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:57.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:40:57 compute-0 sudo[153748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhrzlcwmbujydjhscpqnccgodicywpir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063657.5590575-1586-185131121988106/AnsiballZ_container_config_hash.py'
Nov 25 09:40:57 compute-0 sudo[153748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:58 compute-0 python3.9[153750]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:40:58 compute-0 sudo[153748]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:58 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:58 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:58 compute-0 ceph-mon[74207]: pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:40:58 compute-0 sudo[153901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bppgahojluhojegmsafjdqoogtwkgryn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063658.2684999-1613-179816895698538/AnsiballZ_podman_container_info.py'
Nov 25 09:40:58 compute-0 sudo[153901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:58 compute-0 python3.9[153903]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 09:40:58 compute-0 sudo[153901]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:58.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:40:59 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840086f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:40:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:40:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:40:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:40:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:59.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:40:59 compute-0 sudo[153947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:40:59 compute-0 sudo[153947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:40:59 compute-0 sudo[153947]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094059 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:40:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:40:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:00 compute-0 sudo[154099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olgdinowzjskihfkemebmzrspoehqlur ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063659.6616578-1652-50859496173036/AnsiballZ_edpm_container_manage.py'
Nov 25 09:41:00 compute-0 sudo[154099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:00 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:00] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:41:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:00] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:41:00 compute-0 python3[154101]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:41:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:00 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:00 compute-0 ceph-mon[74207]: pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:41:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:00.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:01 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:01.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:02 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840086f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:02 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:02 compute-0 ceph-mon[74207]: pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:41:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:02.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:41:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:03 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:03.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:04 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:04 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0840086f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:04 compute-0 ceph-mon[74207]: pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:04.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:05 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:05.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:06 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:06 compute-0 podman[154111]: 2025-11-25 09:41:06.205637416 +0000 UTC m=+5.920169928 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 09:41:06 compute-0 podman[154217]: 2025-11-25 09:41:06.300963803 +0000 UTC m=+0.030093443 container create bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 09:41:06 compute-0 podman[154217]: 2025-11-25 09:41:06.287148601 +0000 UTC m=+0.016278231 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 09:41:06 compute-0 python3[154101]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
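The PODMAN-CONTAINER-DEBUG line is the rendered form of the config_data label shown in the preceding container-create record. A minimal sketch of that mapping, assuming nothing beyond what the two log lines show (it is an illustration, not the real edpm_container_manage implementation):

```python
# Render an edpm/kolla-style config_data dict into podman CLI flags,
# mirroring the mapping visible in the two log lines above. Abbreviated
# volume list; the real config carries the full set shown in the log.
config_data = {
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "net": "host",
    "privileged": True,
    "user": "root",
    "volumes": [
        "/lib/modules:/lib/modules:ro",
        "/run:/run",
        "/var/lib/openvswitch/ovn:/run/ovn:shared,z",
    ],
}

def render_flags(name: str, cfg: dict) -> list[str]:
    argv = ["podman", "create", "--name", name]
    for key, val in cfg.get("environment", {}).items():
        argv += ["--env", f"{key}={val}"]
    argv += ["--network", cfg["net"], "--user", cfg["user"]]
    if cfg.get("privileged"):
        argv.append("--privileged=True")
    for vol in cfg.get("volumes", []):
        argv += ["--volume", vol]
    return argv

print(" ".join(render_flags("ovn_controller", config_data)))
```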
Nov 25 09:41:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:06 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:06 compute-0 sudo[154099]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:06 compute-0 ceph-mon[74207]: pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:06 compute-0 sudo[154394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcxfyfpbtaqezvtyrfqrmsnqtjjbqnkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063666.520362-1676-233508239443960/AnsiballZ_stat.py'
Nov 25 09:41:06 compute-0 sudo[154394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:06 compute-0 python3.9[154396]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:41:06 compute-0 sudo[154394]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:06.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:06.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
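Every alertmanager retry above fails at name resolution for the three shiftstack dashboard hosts, against the resolver at 192.168.122.80. A small reproduction of the failing lookup (hostnames copied from the log; resolution goes through whatever resolver the machine running it is configured with):

```python
import socket

# Hostnames taken from the webhook errors above; on this node the system
# resolver points at 192.168.122.80 per the log.
hosts = [
    "np0005534694.shiftstack",
    "np0005534695.shiftstack",
    "np0005534696.shiftstack",
]

for host in hosts:
    try:
        addrs = {ai[4][0] for ai in
                 socket.getaddrinfo(host, 8443, proto=socket.IPPROTO_TCP)}
        print(f"{host}: {sorted(addrs)}")
    except socket.gaierror as exc:
        # Corresponds to the 'no such host' the webhook notifier reports.
        print(f"{host}: lookup failed ({exc})")
```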
Nov 25 09:41:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:07 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40026e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:07 compute-0 sudo[154548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msoosbumqqrmdyxiomlggbmesdriugth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.1204052-1703-98028745957423/AnsiballZ_file.py'
Nov 25 09:41:07 compute-0 sudo[154548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:07 compute-0 python3.9[154550]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:07 compute-0 sudo[154548]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:07.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:07 compute-0 sudo[154624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fauqggockpshcvqxzyyxwgrghqzanibk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.1204052-1703-98028745957423/AnsiballZ_stat.py'
Nov 25 09:41:07 compute-0 sudo[154624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:07 compute-0 python3.9[154626]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:41:07 compute-0 sudo[154624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:08 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:08 compute-0 sudo[154777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfyfesgzuxyohvheoeohuegkbggxsqbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.8288333-1703-266952682466599/AnsiballZ_copy.py'
Nov 25 09:41:08 compute-0 sudo[154777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:08 compute-0 python3.9[154779]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063667.8288333-1703-266952682466599/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:08 compute-0 sudo[154777]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:08 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:08 compute-0 ceph-mon[74207]: pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:08 compute-0 sudo[154853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qurdiuqfnjtegtbptprdrfpykbuwkdcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.8288333-1703-266952682466599/AnsiballZ_systemd.py'
Nov 25 09:41:08 compute-0 sudo[154853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:08 compute-0 python3.9[154855]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:41:08 compute-0 systemd[1]: Reloading.
Nov 25 09:41:08 compute-0 systemd-rc-local-generator[154875]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:08 compute-0 systemd-sysv-generator[154879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:08.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:08 compute-0 sudo[154853]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:09 compute-0 sudo[154963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubljbskpobjanawkcccptvvgtxwoaksb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063667.8288333-1703-266952682466599/AnsiballZ_systemd.py'
Nov 25 09:41:09 compute-0 sudo[154963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:09 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:09 compute-0 python3.9[154965]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:41:09 compute-0 systemd[1]: Reloading.
Nov 25 09:41:09 compute-0 systemd-rc-local-generator[154988]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:09 compute-0 systemd-sysv-generator[154991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:09.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:09 compute-0 systemd[1]: Starting ovn_controller container...
Nov 25 09:41:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c67f864d505a5a3ac8800e96dd3b53407d5460f8582fa8a23b805370d527a9cf/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 09:41:09 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39.
Nov 25 09:41:09 compute-0 podman[155006]: 2025-11-25 09:41:09.71767779 +0000 UTC m=+0.081449067 container init bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 09:41:09 compute-0 ovn_controller[155020]: + sudo -E kolla_set_configs
Nov 25 09:41:09 compute-0 podman[155006]: 2025-11-25 09:41:09.73806105 +0000 UTC m=+0.101832317 container start bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 09:41:09 compute-0 edpm-start-podman-container[155006]: ovn_controller
Nov 25 09:41:09 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 25 09:41:09 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 09:41:09 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 09:41:09 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 25 09:41:09 compute-0 systemd[155046]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 25 09:41:09 compute-0 edpm-start-podman-container[155005]: Creating additional drop-in dependency for "ovn_controller" (bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39)
Nov 25 09:41:09 compute-0 systemd[1]: Reloading.
Nov 25 09:41:09 compute-0 podman[155027]: 2025-11-25 09:41:09.840512271 +0000 UTC m=+0.094668218 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:41:09 compute-0 systemd[155046]: Queued start job for default target Main User Target.
Nov 25 09:41:09 compute-0 systemd[155046]: Created slice User Application Slice.
Nov 25 09:41:09 compute-0 systemd[155046]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 09:41:09 compute-0 systemd[155046]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 09:41:09 compute-0 systemd[155046]: Reached target Paths.
Nov 25 09:41:09 compute-0 systemd[155046]: Reached target Timers.
Nov 25 09:41:09 compute-0 systemd[155046]: Starting D-Bus User Message Bus Socket...
Nov 25 09:41:09 compute-0 systemd[155046]: Starting Create User's Volatile Files and Directories...
Nov 25 09:41:09 compute-0 systemd[155046]: Listening on D-Bus User Message Bus Socket.
Nov 25 09:41:09 compute-0 systemd[155046]: Reached target Sockets.
Nov 25 09:41:09 compute-0 systemd[155046]: Finished Create User's Volatile Files and Directories.
Nov 25 09:41:09 compute-0 systemd[155046]: Reached target Basic System.
Nov 25 09:41:09 compute-0 systemd[155046]: Reached target Main User Target.
Nov 25 09:41:09 compute-0 systemd[155046]: Startup finished in 111ms.
Nov 25 09:41:09 compute-0 systemd-sysv-generator[155106]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:09 compute-0 systemd-rc-local-generator[155103]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:10 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40026e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:10 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 25 09:41:10 compute-0 systemd[1]: Started ovn_controller container.
Nov 25 09:41:10 compute-0 systemd[1]: bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39-3b7541126f99b04a.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 09:41:10 compute-0 systemd[1]: bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39-3b7541126f99b04a.service: Failed with result 'exit-code'.
Nov 25 09:41:10 compute-0 systemd[1]: Started Session c1 of User root.
Nov 25 09:41:10 compute-0 sudo[154963]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:10 compute-0 ovn_controller[155020]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:41:10 compute-0 ovn_controller[155020]: INFO:__main__:Validating config file
Nov 25 09:41:10 compute-0 ovn_controller[155020]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:41:10 compute-0 ovn_controller[155020]: INFO:__main__:Writing out command to execute
Nov 25 09:41:10 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 09:41:10 compute-0 ovn_controller[155020]: ++ cat /run_command
Nov 25 09:41:10 compute-0 ovn_controller[155020]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 09:41:10 compute-0 ovn_controller[155020]: + ARGS=
Nov 25 09:41:10 compute-0 ovn_controller[155020]: + sudo kolla_copy_cacerts
Nov 25 09:41:10 compute-0 systemd[1]: Started Session c2 of User root.
Nov 25 09:41:10 compute-0 ovn_controller[155020]: + [[ ! -n '' ]]
Nov 25 09:41:10 compute-0 ovn_controller[155020]: + . kolla_extend_start
Nov 25 09:41:10 compute-0 ovn_controller[155020]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 09:41:10 compute-0 ovn_controller[155020]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 09:41:10 compute-0 ovn_controller[155020]: + umask 0022
Nov 25 09:41:10 compute-0 ovn_controller[155020]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
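The trace above is the kolla start sequence: load /var/lib/kolla/config_files/config.json, validate it, apply COPY_ALWAYS, write the command to /run_command, then exec it. A compressed sketch of the same flow, with the file paths taken from the log and everything else simplified (the real kolla scripts also handle copies, ownership, and permissions):

```python
# Compressed sketch of the start sequence traced above, under the assumption
# that config.json carries a "command" key as kolla's config format does.
import json
import os

CONFIG = "/var/lib/kolla/config_files/config.json"
RUN_COMMAND = "/run_command"

def kolla_start() -> None:
    with open(CONFIG) as f:
        cfg = json.load(f)                  # "Loading config file"
    cmd = cfg["command"]                    # "Validating config file"
    with open(RUN_COMMAND, "w") as f:       # "Writing out command to execute"
        f.write(cmd)
    argv = cmd.split()
    os.execv(argv[0], argv)                 # replaces the shell, like 'exec'

# kolla_start()  # would exec /usr/bin/ovn-controller ... per this log
```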
Nov 25 09:41:10 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.1762] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.1767] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.1775] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.1779] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.1781] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 09:41:10 compute-0 kernel: br-int: entered promiscuous mode
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 09:41:10 compute-0 ovn_controller[155020]: 2025-11-25T09:41:10Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
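ovn-controller reaches the southbound DB over SSL using the key/cert/CA passed as -p/-c/-C above. A minimal liveness sketch against the same endpoint (paths and endpoint copied from the log; the JSON-RPC "echo" method is part of the OVSDB protocol per RFC 7047; whether verification succeeds depends on the names in the server certificate):

```python
import json
import socket
import ssl

# Endpoint and certificate paths as logged by ovn-controller above.
HOST, PORT = "ovsdbserver-sb.openstack.svc", 6642

ctx = ssl.create_default_context(cafile="/etc/pki/tls/certs/ovndbca.crt")
ctx.load_cert_chain(certfile="/etc/pki/tls/certs/ovndb.crt",
                    keyfile="/etc/pki/tls/private/ovndb.key")

with socket.create_connection((HOST, PORT), timeout=5) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        # OVSDB JSON-RPC "echo" (RFC 7047) doubles as a keepalive/liveness check.
        tls.sendall(json.dumps(
            {"method": "echo", "params": [], "id": "ping"}).encode())
        print(tls.recv(4096).decode())
```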
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.1899] manager: (ovn-2c2076-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.1907] manager: (ovn-ad0cdb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.1915] manager: (ovn-f116e4-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 25 09:41:10 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 09:41:10 compute-0 systemd-udevd[155152]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.2023] device (genev_sys_6081): carrier: link connected
Nov 25 09:41:10 compute-0 NetworkManager[48903]: <info>  [1764063670.2025] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Nov 25 09:41:10 compute-0 systemd-udevd[155157]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:41:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:10] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:41:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:10] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:41:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:10 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:10 compute-0 sudo[155281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oufeeiwgsvwraxvfougxidcfivpfwuso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063670.2338564-1787-171849147472462/AnsiballZ_command.py'
Nov 25 09:41:10 compute-0 sudo[155281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:10 compute-0 ceph-mon[74207]: pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:10 compute-0 python3.9[155283]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:41:10 compute-0 ovs-vsctl[155284]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 09:41:10 compute-0 sudo[155281]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:10.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:10 compute-0 sudo[155434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nugrwvfccpohxshruczorgntxvlkybni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063670.7234511-1811-220747097475095/AnsiballZ_command.py'
Nov 25 09:41:10 compute-0 sudo[155434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:11 compute-0 python3.9[155436]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:41:11 compute-0 ovs-vsctl[155438]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 25 09:41:11 compute-0 sudo[155434]: pam_unix(sudo:session): session closed for user root
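The db_ctl_base error above is ovs-vsctl refusing a `get` on an absent external_ids key (the playbook tolerates the non-zero exit and removes the key anyway in the next task). ovs-vsctl's standard `--if-exists` flag makes both the read and the removal silent when the key is missing; a sketch of the guarded read:

```python
import subprocess

# '--if-exists' turns the missing-key case into an empty result instead of
# the db_ctl_base error logged above; the same flag works for 'remove'.
def get_cms_options() -> str:
    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "external_ids:ovn-cms-options"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out.strip('"')   # mirrors the sed 's/\"//g' in the playbook

print(get_cms_options() or "<unset>")
```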
Nov 25 09:41:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:11 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:11.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:11 compute-0 sudo[155590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzsfiusptjisjjzvdhzwbxiqwbflnrll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063671.5113168-1853-39643118901158/AnsiballZ_command.py'
Nov 25 09:41:11 compute-0 sudo[155590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:11 compute-0 python3.9[155592]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:41:11 compute-0 ovs-vsctl[155593]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 25 09:41:11 compute-0 sudo[155590]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:12 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:12 compute-0 sshd-session[143707]: Connection closed by 192.168.122.30 port 56440
Nov 25 09:41:12 compute-0 sshd-session[143704]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:41:12 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 25 09:41:12 compute-0 systemd[1]: session-50.scope: Consumed 40.971s CPU time.
Nov 25 09:41:12 compute-0 systemd-logind[744]: Session 50 logged out. Waiting for processes to exit.
Nov 25 09:41:12 compute-0 systemd-logind[744]: Removed session 50.
Nov 25 09:41:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:12 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:12 compute-0 ceph-mon[74207]: pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:12.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:13 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40050d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:41:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:13.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:41:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:14 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:14 compute-0 ceph-mon[74207]: pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:14.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:41:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:15 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c05d430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:15.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:16 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40050d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:16 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40050d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:16 compute-0 ceph-mon[74207]: pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:16.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:16.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:16.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:16.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:16.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:17 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40050d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:41:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:17.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:41:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2809 writes, 13K keys, 2809 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2809 writes, 2809 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2809 writes, 13K keys, 2809 commit groups, 1.0 writes per commit group, ingest: 24.73 MB, 0.04 MB/s
                                           Interval WAL: 2809 writes, 2809 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    421.1      0.05              0.03         6    0.008       0      0       0.0       0.0
                                             L6      1/0   11.17 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9    492.7    423.9      0.14              0.09         5    0.028     19K   2290       0.0       0.0
                                            Sum      1/0   11.17 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    364.7    423.2      0.19              0.12        11    0.017     19K   2290       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    367.8    426.6      0.19              0.12        10    0.019     19K   2290       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    492.7    423.9      0.14              0.09         5    0.028     19K   2290       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    434.5      0.05              0.03         5    0.010       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     27.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.020
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e6ae573350#2 capacity: 304.00 MB usage: 2.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(170,2.64 MB,0.869726%) FilterBlock(12,64.23 KB,0.0206345%) IndexBlock(12,131.75 KB,0.0423231%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
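The indented block above is a periodic RocksDB statistics dump, apparently from the monitor's key-value store (BinnedLRUCache is Ceph's block-cache shim), and its figures are internally consistent: 0.08 GB of cumulative compaction writes over the 600.0 s uptime works out to the quoted ~0.13 MB/s. A minimal sketch of rechecking that arithmetic from a captured dump, assuming only the label strings seen in this capture ("Uptime(secs) ... total", "Cumulative compaction ... GB write"); other RocksDB builds may format these lines differently:

    import re

    # Minimal sketch: recompute the cumulative compaction write rate from a
    # RocksDB stats dump shaped like the one logged above. The regexes encode
    # the label strings seen in this capture, which is an assumption.
    def compaction_write_rate(dump: str) -> float:
        uptime_s = float(re.search(r"Uptime\(secs\): ([\d.]+) total", dump).group(1))
        written_gb = float(re.search(r"Cumulative compaction: ([\d.]+) GB write", dump).group(1))
        return written_gb * 1024 / uptime_s  # MB/s, comparable to the logged figure

    sample = ("Uptime(secs): 600.0 total, 600.0 interval\n"
              "Cumulative compaction: 0.08 GB write, 0.13 MB/s write")
    print(round(compaction_write_rate(sample), 2))  # ~0.14 MB/s, matching 0.13 within rounding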
Nov 25 09:41:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:17 compute-0 sshd-session[155625]: Accepted publickey for zuul from 192.168.122.30 port 42822 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:41:17 compute-0 systemd-logind[744]: New session 52 of user zuul.
Nov 25 09:41:17 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 25 09:41:17 compute-0 sshd-session[155625]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:41:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:18 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:18 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:18 compute-0 ceph-mon[74207]: pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:41:18 compute-0 python3.9[155779]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:41:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:41:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:41:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:19 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:19 compute-0 sudo[155933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfrgvsrmfokcaqhnntmmhehkumwktgwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063679.2048287-62-7441017805402/AnsiballZ_file.py'
Nov 25 09:41:19 compute-0 sudo[155933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:19 compute-0 sudo[155936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:19 compute-0 sudo[155936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:19 compute-0 sudo[155936]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:19 compute-0 python3.9[155935]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:19 compute-0 sudo[155933]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:19 compute-0 sudo[156112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwuoepwfxyevpmkfzcxztxddksyuzrmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063679.8084157-62-65707911597353/AnsiballZ_file.py'
Nov 25 09:41:19 compute-0 sudo[156112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:20 compute-0 python3.9[156114]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:20 compute-0 sudo[156112]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:20 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 25 09:41:20 compute-0 systemd[155046]: Activating special unit Exit the Session...
Nov 25 09:41:20 compute-0 systemd[155046]: Stopped target Main User Target.
Nov 25 09:41:20 compute-0 systemd[155046]: Stopped target Basic System.
Nov 25 09:41:20 compute-0 systemd[155046]: Stopped target Paths.
Nov 25 09:41:20 compute-0 systemd[155046]: Stopped target Sockets.
Nov 25 09:41:20 compute-0 systemd[155046]: Stopped target Timers.
Nov 25 09:41:20 compute-0 systemd[155046]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 09:41:20 compute-0 systemd[155046]: Closed D-Bus User Message Bus Socket.
Nov 25 09:41:20 compute-0 systemd[155046]: Stopped Create User's Volatile Files and Directories.
Nov 25 09:41:20 compute-0 systemd[155046]: Removed slice User Application Slice.
Nov 25 09:41:20 compute-0 systemd[155046]: Reached target Shutdown.
Nov 25 09:41:20 compute-0 systemd[155046]: Finished Exit the Session.
Nov 25 09:41:20 compute-0 systemd[155046]: Reached target Exit the Session.
Nov 25 09:41:20 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 09:41:20 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 25 09:41:20 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 09:41:20 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 09:41:20 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 09:41:20 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 09:41:20 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 09:41:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:20] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 25 09:41:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:20] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 25 09:41:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:20 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40050d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:20 compute-0 sudo[156266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-judtcnrnotxhizrfjweevprbsreuuzoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063680.2604096-62-20055739290568/AnsiballZ_file.py'
Nov 25 09:41:20 compute-0 sudo[156266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:20 compute-0 ceph-mon[74207]: pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:20 compute-0 python3.9[156268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:20 compute-0 sudo[156266]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:20 compute-0 sudo[156418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdogslxddvhcncfqvmtdfbqxjodltefz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063680.7120814-62-3418662653638/AnsiballZ_file.py'
Nov 25 09:41:20 compute-0 sudo[156418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:21 compute-0 python3.9[156420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:21 compute-0 sudo[156418]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:21 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c04b3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:41:21 compute-0 sudo[156570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdeeipeqsmszhgqfjdwtosnpahjlfdoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063681.1534483-62-263009996486507/AnsiballZ_file.py'
Nov 25 09:41:21 compute-0 sudo[156570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:21 compute-0 python3.9[156572]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:21.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:21 compute-0 sudo[156570]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:22 compute-0 python3.9[156723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:41:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:22 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:22 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:22 compute-0 sudo[156874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyymqlavaqukrzuzgppsevqvflbzgfjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063682.203685-194-75720478904002/AnsiballZ_seboolean.py'
Nov 25 09:41:22 compute-0 sudo[156874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:22 compute-0 ceph-mon[74207]: pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:41:22 compute-0 python3.9[156876]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 09:41:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:22.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:23 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:23 compute-0 sudo[156874]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:23.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:23 compute-0 python3.9[157029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094123 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:41:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:24 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:24 compute-0 python3.9[157152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063683.3422844-218-3710681720732/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:24 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40050d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:24 compute-0 ceph-mon[74207]: pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:24 compute-0 python3.9[157302]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:24.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:25 compute-0 python3.9[157423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063684.453846-263-8460203322740/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:25 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:25.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:25 compute-0 sudo[157573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmswtoewcdhfrtfmbtaqiunaexaekiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063685.4391353-314-250906748819170/AnsiballZ_setup.py'
Nov 25 09:41:25 compute-0 sudo[157573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:25 compute-0 python3.9[157575]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:41:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:26 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:26 compute-0 sudo[157573]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:26 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c04b3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:26 compute-0 sudo[157659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnyxvdlwxjlwjsnlogrjuwqoggwyholu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063685.4391353-314-250906748819170/AnsiballZ_dnf.py'
Nov 25 09:41:26 compute-0 sudo[157659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:26 compute-0 python3.9[157661]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:41:26 compute-0 ceph-mon[74207]: pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:26.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:26.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:26.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:26.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:26.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
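The alertmanager errors above are DNS failures, not webhook-side rejections: every ceph-dashboard target under .shiftstack fails to resolve against the resolver at 192.168.122.80:53. A hedged sketch of reproducing the lookup result outside alertmanager (hostnames copied from the log; this uses whatever resolver the host is configured with, which may differ from 192.168.122.80):

    import socket

    # Try to resolve the webhook targets named in the alertmanager errors above.
    # A socket.gaierror here corresponds to the "no such host" in the log.
    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            addrs = {info[4][0] for info in socket.getaddrinfo(host, 8443)}
            print(host, "->", ", ".join(sorted(addrs)))
        except socket.gaierror as err:
            print(host, "-> lookup failed:", err)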
Nov 25 09:41:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:27 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c04b3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:41:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:27.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:27 compute-0 sudo[157659]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:28 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:28 compute-0 sudo[157814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kskqfcpdlwthdenelqioasxjcaxxkvlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063687.708733-350-63909716380459/AnsiballZ_systemd.py'
Nov 25 09:41:28 compute-0 sudo[157814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:28 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c04b3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:28 compute-0 python3.9[157816]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:41:28 compute-0 sudo[157814]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:28 compute-0 ceph-mon[74207]: pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:41:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:28.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:29 compute-0 python3.9[157971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:29 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:29 compute-0 python3.9[158092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063688.6309927-374-47292975489416/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:29.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:29 compute-0 python3.9[158242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:41:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:30 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:30 compute-0 python3.9[158365]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063689.4874644-374-96314332705833/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:30] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:41:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:30] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:41:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:30 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0900079a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:30 compute-0 ceph-mon[74207]: pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:30.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:31 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c0293e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:31 compute-0 python3.9[158515]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:31.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:31 compute-0 python3.9[158636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063691.0629878-506-254935364817353/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:32 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0600089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:32 compute-0 python3.9[158788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:32 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:32 compute-0 python3.9[158909]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063691.867792-506-79381252973261/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:32 compute-0 ceph-mon[74207]: pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:32 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:41:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:32.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:33 compute-0 python3.9[159059]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:41:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:33 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0900079a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:33 compute-0 sudo[159211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmscjfbhzffdqnbcdxlltejsqqfdyydp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063693.248201-620-10527450794103/AnsiballZ_file.py'
Nov 25 09:41:33 compute-0 sudo[159211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:33.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:33 compute-0 python3.9[159213]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:33 compute-0 sudo[159211]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:33 compute-0 sudo[159365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqxfkpxbvobikgndutinqviwiadtvbpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063693.7386007-644-116192566791065/AnsiballZ_stat.py'
Nov 25 09:41:33 compute-0 sudo[159365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:34 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c0293e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:34 compute-0 python3.9[159367]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:34 compute-0 sudo[159365]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:34 compute-0 sudo[159443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssfjoecazhuildqlcgwkbhwuukjdppkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063693.7386007-644-116192566791065/AnsiballZ_file.py'
Nov 25 09:41:34 compute-0 sudo[159443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:34 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc06000a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:34 compute-0 python3.9[159445]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:34 compute-0 sudo[159443]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:34 compute-0 ceph-mon[74207]: pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:34 compute-0 sudo[159595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcszrajsiciyzdpsnzkarrratspmdyuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063694.5129213-644-110185131320973/AnsiballZ_stat.py'
Nov 25 09:41:34 compute-0 sudo[159595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:34 compute-0 python3.9[159597]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:34 compute-0 sudo[159595]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:34.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:34 compute-0 sudo[159673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jogqpbubzdzcacjahravuxxhqyijpsqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063694.5129213-644-110185131320973/AnsiballZ_file.py'
Nov 25 09:41:34 compute-0 sudo[159673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:35 compute-0 python3.9[159675]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:35 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:35 compute-0 sudo[159673]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:35 compute-0 sudo[159825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgdufwhurcjupjzzraphakwvpczeclcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063695.3143811-713-107688825734233/AnsiballZ_file.py'
Nov 25 09:41:35 compute-0 sudo[159825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:35.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:35 compute-0 python3.9[159827]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:35 compute-0 sudo[159825]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:35 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:41:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:35 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:41:35 compute-0 sudo[159979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zysrrsxoweofngvseiozaqmmxzcfunrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063695.7899141-737-85911784767244/AnsiballZ_stat.py'
Nov 25 09:41:35 compute-0 sudo[159979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:36 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc090007b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:36 compute-0 python3.9[159981]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:36 compute-0 sudo[159979]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:36 compute-0 sudo[160057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqxfyimqgiaohkxndlrybsziwvlwtwrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063695.7899141-737-85911784767244/AnsiballZ_file.py'
Nov 25 09:41:36 compute-0 sudo[160057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:36 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c0293e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:36 compute-0 python3.9[160059]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:36 compute-0 sudo[160057]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:36 compute-0 ceph-mon[74207]: pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:36 compute-0 sudo[160209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnrussqdfsvznhfehpdfhzfekctauusu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063696.6074407-773-195343494767874/AnsiballZ_stat.py'
Nov 25 09:41:36 compute-0 sudo[160209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:36.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:36 compute-0 python3.9[160211]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:36 compute-0 sudo[160209]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:36.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:36.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:36.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:36.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:37 compute-0 sudo[160287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfmzuquawxqqzfbfulbqxxuvevhcjghu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063696.6074407-773-195343494767874/AnsiballZ_file.py'
Nov 25 09:41:37 compute-0 sudo[160287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:37 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc06000a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:37 compute-0 python3.9[160289]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:37 compute-0 sudo[160287]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:37.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:37 compute-0 sudo[160440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pngplwqquvystqtsakixioierwjbuukr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063697.432381-809-152838131655261/AnsiballZ_systemd.py'
Nov 25 09:41:37 compute-0 sudo[160440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:37 compute-0 python3.9[160442]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:41:37 compute-0 systemd[1]: Reloading.
Nov 25 09:41:37 compute-0 systemd-sysv-generator[160468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:37 compute-0 systemd-rc-local-generator[160465]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:38 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:38 compute-0 sudo[160440]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:38 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0ac0a6860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:38 compute-0 sudo[160632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbvzdeloowvqemyfcaprhiaxsoezrocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063698.3063345-833-37263454321439/AnsiballZ_stat.py'
Nov 25 09:41:38 compute-0 sudo[160632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:38 compute-0 python3.9[160634]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:38 compute-0 ceph-mon[74207]: pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:38 compute-0 sudo[160632]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:38 : epoch 69257906 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:41:38 compute-0 sudo[160710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckxhqoeovlqdpynxirxtjwqoeernnxjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063698.3063345-833-37263454321439/AnsiballZ_file.py'
Nov 25 09:41:38 compute-0 sudo[160710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:38.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:38 compute-0 python3.9[160712]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:39 compute-0 sudo[160710]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:39 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c0293e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:39 compute-0 sudo[160862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqqxnsjqnaiwboexlybldknvgtwazusz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063699.2259617-869-113824154965752/AnsiballZ_stat.py'
Nov 25 09:41:39 compute-0 sudo[160862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:39.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:39 compute-0 python3.9[160864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:39 compute-0 sudo[160862]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:39 compute-0 sudo[160908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:39 compute-0 sudo[160908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:39 compute-0 sudo[160908]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:39 compute-0 sudo[160966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlcfsxsmuqektvkwtxpuvhbpzujhvqau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063699.2259617-869-113824154965752/AnsiballZ_file.py'
Nov 25 09:41:39 compute-0 sudo[160966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:39 compute-0 python3.9[160968]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:39 compute-0 sudo[160966]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:40 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc06000a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:40 compute-0 sudo[161127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmsjqdzmxcxtkvflwflsyepquqkomrsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063700.0257547-905-97707973702286/AnsiballZ_systemd.py'
Nov 25 09:41:40 compute-0 sudo[161127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:40 compute-0 ovn_controller[155020]: 2025-11-25T09:41:40Z|00025|memory|INFO|16000 kB peak resident set size after 30.1 seconds
Nov 25 09:41:40 compute-0 ovn_controller[155020]: 2025-11-25T09:41:40Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 25 09:41:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:40] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:41:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:40] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:41:40 compute-0 podman[161093]: 2025-11-25 09:41:40.253439363 +0000 UTC m=+0.066824783 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:41:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:40 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:40 compute-0 python3.9[161135]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:41:40 compute-0 systemd[1]: Reloading.
Nov 25 09:41:40 compute-0 systemd-rc-local-generator[161165]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:40 compute-0 systemd-sysv-generator[161168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:41:40 compute-0 ceph-mon[74207]: pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:40 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 09:41:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:41:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:41:40 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 09:41:40 compute-0 sudo[161127]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:40.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:41 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0ac0a6860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:41:41 compute-0 sudo[161334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbmpffbstpevzxqjjgurfawnnqtxvhdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063701.063125-935-19921947219677/AnsiballZ_file.py'
Nov 25 09:41:41 compute-0 sudo[161334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:41 compute-0 python3.9[161336]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:41 compute-0 sudo[161334]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:41.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:41 compute-0 sudo[161487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrhxgrgednmrrulugsoxsephphxgcfxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063701.5686696-959-111748397684525/AnsiballZ_stat.py'
Nov 25 09:41:41 compute-0 sudo[161487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:41 compute-0 python3.9[161489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:41 compute-0 sudo[161487]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:42 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c0293e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:42 compute-0 sudo[161611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfrxujytjndkexshlzikrfprrjgrbljk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063701.5686696-959-111748397684525/AnsiballZ_copy.py'
Nov 25 09:41:42 compute-0 sudo[161611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:42 compute-0 python3.9[161613]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063701.5686696-959-111748397684525/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:42 compute-0 sudo[161611]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:42 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc06000a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:42 compute-0 ceph-mon[74207]: pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:41:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:42 compute-0 sudo[161763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgqjebjcvweybrorsndhgmdzoihqcslq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063702.682827-1010-21359562623768/AnsiballZ_file.py'
Nov 25 09:41:42 compute-0 sudo[161763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:42.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:43 compute-0 python3.9[161765]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:41:43 compute-0 sudo[161763]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:43 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:41:43 compute-0 sudo[161915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujniwkmvtrnizrwownteublogwyhtoii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063703.2092206-1034-196949621023124/AnsiballZ_stat.py'
Nov 25 09:41:43 compute-0 sudo[161915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:43.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:43 compute-0 python3.9[161917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:41:43 compute-0 sudo[161915]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:43 compute-0 sudo[162039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbktiaidbboasyvifanurhmhijawmrms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063703.2092206-1034-196949621023124/AnsiballZ_copy.py'
Nov 25 09:41:43 compute-0 sudo[162039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094143 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:41:43 compute-0 python3.9[162041]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063703.2092206-1034-196949621023124/.source.json _original_basename=.sqtiyjk2 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:43 compute-0 sudo[162039]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:44 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0ac0a6860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:44 compute-0 sudo[162192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anlzorwzxudjpzeetqzplxyxqyjtdfrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063704.1155033-1079-27337118734974/AnsiballZ_file.py'
Nov 25 09:41:44 compute-0 sudo[162192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:44 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc09c0293e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:44 compute-0 python3.9[162194]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:44 compute-0 sudo[162192]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:44 compute-0 ceph-mon[74207]: pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:41:44 compute-0 sudo[162344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anpcbtgoccezqnkuvjfazyortyqadgsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063704.6364896-1103-78190291737723/AnsiballZ_stat.py'
Nov 25 09:41:44 compute-0 sudo[162344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:41:44
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'default.rgw.log', '.mgr']
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:41:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:44.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:41:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:44 compute-0 sudo[162344]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:41:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:41:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:45 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc06000a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:45 compute-0 sudo[162468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfuywxfzybkdwjuqsbfunimxowbhtfcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063704.6364896-1103-78190291737723/AnsiballZ_copy.py'
Nov 25 09:41:45 compute-0 sudo[162468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:41:45 compute-0 sudo[162468]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:45.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:46 compute-0 sudo[162622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwadqwiusfjlitmpybxenjzyfiybtwkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063705.69954-1154-193800499019744/AnsiballZ_container_config_data.py'
Nov 25 09:41:46 compute-0 sudo[162622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:46 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:46 compute-0 python3.9[162624]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 25 09:41:46 compute-0 sudo[162622]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:46 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:46 compute-0 ceph-mon[74207]: pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:41:46 compute-0 sudo[162774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wppblxntgbluqjtzicpexiuetkkoiyck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063706.3964639-1181-9809426131126/AnsiballZ_container_config_hash.py'
Nov 25 09:41:46 compute-0 sudo[162774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:46 compute-0 python3.9[162776]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:41:46 compute-0 sudo[162774]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 25 09:41:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:46.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:46.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:46.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:46.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:47 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:47 compute-0 sudo[162926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgyqbpkkvixdicovdieebkygqczeybsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063707.0978553-1208-281413313623933/AnsiballZ_podman_container_info.py'
Nov 25 09:41:47 compute-0 sudo[162926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:47.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:47 compute-0 python3.9[162928]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 09:41:47 compute-0 sudo[162926]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:48 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc06000a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:48 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:48 compute-0 ceph-mon[74207]: pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:41:48 compute-0 sudo[163099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dazvimladozipsykjutpnsjsjzodrsgb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063708.5211508-1247-179720333308011/AnsiballZ_edpm_container_manage.py'
Nov 25 09:41:48 compute-0 sudo[163099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:48.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:49 compute-0 python3[163101]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:41:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:49 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc054002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:49.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:50 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc0a40213e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:50] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Nov 25 09:41:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:41:50] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Nov 25 09:41:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:50 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc06000a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:41:50 compute-0 ceph-mon[74207]: pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:51 compute-0 kernel: ganesha.nfsd[160290]: segfault at 50 ip 00007fc10eba232e sp 00007fc0cfffe210 error 4 in libntirpc.so.5.8[7fc10eb87000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 25 09:41:51 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:41:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[123507]: 25/11/2025 09:41:51 : epoch 69257906 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc06000a9e0 fd 48 proxy ignored for local
Nov 25 09:41:51 compute-0 systemd[1]: Started Process Core Dump (PID 163156/UID 0).
Nov 25 09:41:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:51.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:52 compute-0 systemd-coredump[163157]: Process 123511 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 67:
                                                    #0  0x00007fc10eba232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:41:52 compute-0 systemd[1]: systemd-coredump@3-163156-0.service: Deactivated successfully.
Nov 25 09:41:52 compute-0 ceph-mon[74207]: pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:41:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:52.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:53 compute-0 podman[163166]: 2025-11-25 09:41:53.499641533 +0000 UTC m=+1.214021766 container died e02f789246dd6fd2b7da468085fc53bece883451d55b67498fa5acdc6d736601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:41:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:53.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:54 compute-0 ceph-mon[74207]: pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:54.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:41:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:41:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:55.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:55 compute-0 ceph-mon[74207]: pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:56 compute-0 sudo[163196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:41:56 compute-0 sudo[163196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:56 compute-0 sudo[163196]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:56 compute-0 sudo[163221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 25 09:41:56 compute-0 sudo[163221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 09:41:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:56.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 09:41:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:56.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:56.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:56.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:41:56.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:41:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094157 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:41:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:57.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:41:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:41:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:41:58 compute-0 sudo[163221]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:41:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:41:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-0 sudo[163293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:41:58 compute-0 sudo[163293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:58 compute-0 sudo[163293]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b63f9444d049f868877f0283043709282975868bae60ce4c5beeb82c2e80a2aa-merged.mount: Deactivated successfully.
Nov 25 09:41:58 compute-0 podman[163166]: 2025-11-25 09:41:58.426387151 +0000 UTC m=+6.140767384 container remove e02f789246dd6fd2b7da468085fc53bece883451d55b67498fa5acdc6d736601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:41:58 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:41:58 compute-0 sudo[163318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:41:58 compute-0 sudo[163318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:58 compute-0 podman[163112]: 2025-11-25 09:41:58.458432627 +0000 UTC m=+9.259653903 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:41:58 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:41:58 compute-0 podman[163379]: 2025-11-25 09:41:58.559581998 +0000 UTC m=+0.034573538 container create c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:41:58 compute-0 podman[163379]: 2025-11-25 09:41:58.544479562 +0000 UTC m=+0.019471113 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:41:58 compute-0 ceph-mon[74207]: pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:41:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:58 compute-0 python3[163101]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:41:58 compute-0 sudo[163099]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:58 compute-0 sudo[163318]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:41:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:58.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:41:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:41:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:41:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 09:41:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:59.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 09:41:59 compute-0 sudo[163465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:41:59 compute-0 sudo[163465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:59 compute-0 sudo[163465]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:41:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:41:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:41:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:41:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:41:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:41:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:41:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:41:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:41:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:41:59 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:41:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:41:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:41:59 compute-0 sudo[163490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:41:59 compute-0 sudo[163490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:41:59 compute-0 sudo[163490]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:59 compute-0 sudo[163516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:41:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:41:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:41:59 compute-0 sudo[163516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:00 compute-0 podman[163618]: 2025-11-25 09:42:00.23025347 +0000 UTC m=+0.028579942 container create 31c3683de6ac49ae628c9648733ae7f9d02c4108f45b1c707a4ac0d7b04a3204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:42:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:00] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:42:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:00] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:42:00 compute-0 systemd[1]: Started libpod-conmon-31c3683de6ac49ae628c9648733ae7f9d02c4108f45b1c707a4ac0d7b04a3204.scope.
Nov 25 09:42:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:42:00 compute-0 podman[163618]: 2025-11-25 09:42:00.278829863 +0000 UTC m=+0.077156346 container init 31c3683de6ac49ae628c9648733ae7f9d02c4108f45b1c707a4ac0d7b04a3204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:42:00 compute-0 podman[163618]: 2025-11-25 09:42:00.283968218 +0000 UTC m=+0.082294680 container start 31c3683de6ac49ae628c9648733ae7f9d02c4108f45b1c707a4ac0d7b04a3204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:42:00 compute-0 podman[163618]: 2025-11-25 09:42:00.285213784 +0000 UTC m=+0.083540256 container attach 31c3683de6ac49ae628c9648733ae7f9d02c4108f45b1c707a4ac0d7b04a3204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:42:00 compute-0 magical_euclid[163661]: 167 167
Nov 25 09:42:00 compute-0 systemd[1]: libpod-31c3683de6ac49ae628c9648733ae7f9d02c4108f45b1c707a4ac0d7b04a3204.scope: Deactivated successfully.
Nov 25 09:42:00 compute-0 podman[163618]: 2025-11-25 09:42:00.289293234 +0000 UTC m=+0.087619696 container died 31c3683de6ac49ae628c9648733ae7f9d02c4108f45b1c707a4ac0d7b04a3204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:42:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d355a50f269e24f51799c18e11729ca0f8c3a59341a2e82bb4d640cd54418650-merged.mount: Deactivated successfully.
Nov 25 09:42:00 compute-0 podman[163618]: 2025-11-25 09:42:00.219330033 +0000 UTC m=+0.017656515 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:42:00 compute-0 podman[163618]: 2025-11-25 09:42:00.317158911 +0000 UTC m=+0.115485374 container remove 31c3683de6ac49ae628c9648733ae7f9d02c4108f45b1c707a4ac0d7b04a3204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:42:00 compute-0 systemd[1]: libpod-conmon-31c3683de6ac49ae628c9648733ae7f9d02c4108f45b1c707a4ac0d7b04a3204.scope: Deactivated successfully.
Nov 25 09:42:00 compute-0 sudo[163727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgpvjlpdjttowhidizyyedriejqievfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063720.1596444-1271-260432422079465/AnsiballZ_stat.py'
Nov 25 09:42:00 compute-0 sudo[163727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:00 compute-0 podman[163735]: 2025-11-25 09:42:00.438879354 +0000 UTC m=+0.030161771 container create c0eef2f790ff00fbfdc2e219bfccc4d301cb549ea3b7aaf01f8fbe70311107e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:42:00 compute-0 systemd[1]: Started libpod-conmon-c0eef2f790ff00fbfdc2e219bfccc4d301cb549ea3b7aaf01f8fbe70311107e0.scope.
Nov 25 09:42:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72653b2d7b518c269897459202945e32e6920bf0d98c129a13f4287b8c2fdb9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72653b2d7b518c269897459202945e32e6920bf0d98c129a13f4287b8c2fdb9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72653b2d7b518c269897459202945e32e6920bf0d98c129a13f4287b8c2fdb9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72653b2d7b518c269897459202945e32e6920bf0d98c129a13f4287b8c2fdb9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72653b2d7b518c269897459202945e32e6920bf0d98c129a13f4287b8c2fdb9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:00 compute-0 podman[163735]: 2025-11-25 09:42:00.493805803 +0000 UTC m=+0.085088220 container init c0eef2f790ff00fbfdc2e219bfccc4d301cb549ea3b7aaf01f8fbe70311107e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_ishizaka, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:42:00 compute-0 podman[163735]: 2025-11-25 09:42:00.498957232 +0000 UTC m=+0.090239650 container start c0eef2f790ff00fbfdc2e219bfccc4d301cb549ea3b7aaf01f8fbe70311107e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:42:00 compute-0 podman[163735]: 2025-11-25 09:42:00.500189994 +0000 UTC m=+0.091472411 container attach c0eef2f790ff00fbfdc2e219bfccc4d301cb549ea3b7aaf01f8fbe70311107e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:42:00 compute-0 podman[163735]: 2025-11-25 09:42:00.427197658 +0000 UTC m=+0.018480076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:42:00 compute-0 python3.9[163729]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:42:00 compute-0 ceph-mon[74207]: pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:42:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:42:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:42:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:42:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:42:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:42:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:42:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:00 compute-0 sudo[163727]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:00 compute-0 youthful_ishizaka[163748]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:42:00 compute-0 youthful_ishizaka[163748]: --> All data devices are unavailable
Nov 25 09:42:00 compute-0 systemd[1]: libpod-c0eef2f790ff00fbfdc2e219bfccc4d301cb549ea3b7aaf01f8fbe70311107e0.scope: Deactivated successfully.
Nov 25 09:42:00 compute-0 podman[163735]: 2025-11-25 09:42:00.769397945 +0000 UTC m=+0.360680362 container died c0eef2f790ff00fbfdc2e219bfccc4d301cb549ea3b7aaf01f8fbe70311107e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_ishizaka, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:42:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-72653b2d7b518c269897459202945e32e6920bf0d98c129a13f4287b8c2fdb9e-merged.mount: Deactivated successfully.
Nov 25 09:42:00 compute-0 podman[163735]: 2025-11-25 09:42:00.795859097 +0000 UTC m=+0.387141515 container remove c0eef2f790ff00fbfdc2e219bfccc4d301cb549ea3b7aaf01f8fbe70311107e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 09:42:00 compute-0 systemd[1]: libpod-conmon-c0eef2f790ff00fbfdc2e219bfccc4d301cb549ea3b7aaf01f8fbe70311107e0.scope: Deactivated successfully.
Nov 25 09:42:00 compute-0 sudo[163516]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:00 compute-0 sudo[163848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:42:00 compute-0 sudo[163848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:00 compute-0 sudo[163848]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:00 compute-0 sudo[163897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:42:00 compute-0 sudo[163897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:00.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:00 compute-0 sudo[163974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nifhffdotthvclniphrmrqgfupkjyrsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063720.7959154-1298-275648369166666/AnsiballZ_file.py'
Nov 25 09:42:00 compute-0 sudo[163974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:01 compute-0 python3.9[163976]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:01 compute-0 sudo[163974]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:01 compute-0 podman[164009]: 2025-11-25 09:42:01.210075058 +0000 UTC m=+0.030428062 container create be2bf213e7ac8e03db1771751173de1b79d205c5f0ad1d916e379d4657bfe47d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:42:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:42:01 compute-0 systemd[1]: Started libpod-conmon-be2bf213e7ac8e03db1771751173de1b79d205c5f0ad1d916e379d4657bfe47d.scope.
Nov 25 09:42:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:42:01 compute-0 podman[164009]: 2025-11-25 09:42:01.2672117 +0000 UTC m=+0.087564714 container init be2bf213e7ac8e03db1771751173de1b79d205c5f0ad1d916e379d4657bfe47d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_carson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:42:01 compute-0 podman[164009]: 2025-11-25 09:42:01.271866955 +0000 UTC m=+0.092219949 container start be2bf213e7ac8e03db1771751173de1b79d205c5f0ad1d916e379d4657bfe47d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_carson, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:42:01 compute-0 podman[164009]: 2025-11-25 09:42:01.273400011 +0000 UTC m=+0.093753005 container attach be2bf213e7ac8e03db1771751173de1b79d205c5f0ad1d916e379d4657bfe47d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:42:01 compute-0 peaceful_carson[164045]: 167 167
Nov 25 09:42:01 compute-0 systemd[1]: libpod-be2bf213e7ac8e03db1771751173de1b79d205c5f0ad1d916e379d4657bfe47d.scope: Deactivated successfully.
Nov 25 09:42:01 compute-0 podman[164009]: 2025-11-25 09:42:01.27526958 +0000 UTC m=+0.095622595 container died be2bf213e7ac8e03db1771751173de1b79d205c5f0ad1d916e379d4657bfe47d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_carson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:42:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-66d0547067b62f12f9d8b133fc2c07827c86e21bd8c38361babb493f822288b6-merged.mount: Deactivated successfully.
Nov 25 09:42:01 compute-0 podman[164009]: 2025-11-25 09:42:01.197975957 +0000 UTC m=+0.018328961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:42:01 compute-0 podman[164009]: 2025-11-25 09:42:01.297174435 +0000 UTC m=+0.117527429 container remove be2bf213e7ac8e03db1771751173de1b79d205c5f0ad1d916e379d4657bfe47d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_carson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 09:42:01 compute-0 systemd[1]: libpod-conmon-be2bf213e7ac8e03db1771751173de1b79d205c5f0ad1d916e379d4657bfe47d.scope: Deactivated successfully.
Nov 25 09:42:01 compute-0 sudo[164112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptbfjrthwoehtsjpcnjehzuuoopcdium ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063720.7959154-1298-275648369166666/AnsiballZ_stat.py'
Nov 25 09:42:01 compute-0 sudo[164112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:01 compute-0 podman[164120]: 2025-11-25 09:42:01.418386169 +0000 UTC m=+0.029052513 container create 6a7ded2ef10b0b521d35212bec537b684d3ef7fcc90127681e50e7467462b5f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ptolemy, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:42:01 compute-0 systemd[1]: Started libpod-conmon-6a7ded2ef10b0b521d35212bec537b684d3ef7fcc90127681e50e7467462b5f8.scope.
Nov 25 09:42:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96119d6399a230573920e4ba22620c7a01c65d87411c85de04dd50de3eaaf4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96119d6399a230573920e4ba22620c7a01c65d87411c85de04dd50de3eaaf4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96119d6399a230573920e4ba22620c7a01c65d87411c85de04dd50de3eaaf4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96119d6399a230573920e4ba22620c7a01c65d87411c85de04dd50de3eaaf4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:01 compute-0 podman[164120]: 2025-11-25 09:42:01.476401404 +0000 UTC m=+0.087067757 container init 6a7ded2ef10b0b521d35212bec537b684d3ef7fcc90127681e50e7467462b5f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ptolemy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:42:01 compute-0 podman[164120]: 2025-11-25 09:42:01.481740156 +0000 UTC m=+0.092406499 container start 6a7ded2ef10b0b521d35212bec537b684d3ef7fcc90127681e50e7467462b5f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:42:01 compute-0 podman[164120]: 2025-11-25 09:42:01.48421649 +0000 UTC m=+0.094882832 container attach 6a7ded2ef10b0b521d35212bec537b684d3ef7fcc90127681e50e7467462b5f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 09:42:01 compute-0 python3.9[164114]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:42:01 compute-0 podman[164120]: 2025-11-25 09:42:01.406370045 +0000 UTC m=+0.017036407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:42:01 compute-0 sudo[164112]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:01.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]: {
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:     "1": [
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:         {
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "devices": [
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "/dev/loop3"
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             ],
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "lv_name": "ceph_lv0",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "lv_size": "21470642176",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "name": "ceph_lv0",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "tags": {
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.cluster_name": "ceph",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.crush_device_class": "",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.encrypted": "0",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.osd_id": "1",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.type": "block",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.vdo": "0",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:                 "ceph.with_tpm": "0"
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             },
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "type": "block",
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:             "vg_name": "ceph_vg0"
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:         }
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]:     ]
Nov 25 09:42:01 compute-0 bold_ptolemy[164133]: }
Nov 25 09:42:01 compute-0 systemd[1]: libpod-6a7ded2ef10b0b521d35212bec537b684d3ef7fcc90127681e50e7467462b5f8.scope: Deactivated successfully.
Nov 25 09:42:01 compute-0 podman[164120]: 2025-11-25 09:42:01.72539277 +0000 UTC m=+0.336059113 container died 6a7ded2ef10b0b521d35212bec537b684d3ef7fcc90127681e50e7467462b5f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ptolemy, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:42:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c96119d6399a230573920e4ba22620c7a01c65d87411c85de04dd50de3eaaf4f-merged.mount: Deactivated successfully.
Nov 25 09:42:01 compute-0 podman[164120]: 2025-11-25 09:42:01.748524624 +0000 UTC m=+0.359190967 container remove 6a7ded2ef10b0b521d35212bec537b684d3ef7fcc90127681e50e7467462b5f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ptolemy, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:42:01 compute-0 systemd[1]: libpod-conmon-6a7ded2ef10b0b521d35212bec537b684d3ef7fcc90127681e50e7467462b5f8.scope: Deactivated successfully.
Nov 25 09:42:01 compute-0 sudo[163897]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:01 compute-0 sudo[164251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:42:01 compute-0 sudo[164251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:01 compute-0 sudo[164251]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:01 compute-0 sudo[164300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:42:01 compute-0 sudo[164300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:01 compute-0 sudo[164349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzbkhoxeqclxasgxnnhnpfgipzcynlac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063721.5557745-1298-270594207519473/AnsiballZ_copy.py'
Nov 25 09:42:01 compute-0 sudo[164349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:02 compute-0 python3.9[164353]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063721.5557745-1298-270594207519473/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:02 compute-0 sudo[164349]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:02 compute-0 podman[164410]: 2025-11-25 09:42:02.151982761 +0000 UTC m=+0.028662128 container create 72b9c2f2492676c950c630bd63f8bab9c8c256ea0bb699e6f08cea690b632ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bhabha, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:42:02 compute-0 systemd[1]: Started libpod-conmon-72b9c2f2492676c950c630bd63f8bab9c8c256ea0bb699e6f08cea690b632ef3.scope.
Nov 25 09:42:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:42:02 compute-0 podman[164410]: 2025-11-25 09:42:02.205339584 +0000 UTC m=+0.082018972 container init 72b9c2f2492676c950c630bd63f8bab9c8c256ea0bb699e6f08cea690b632ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:42:02 compute-0 podman[164410]: 2025-11-25 09:42:02.209886265 +0000 UTC m=+0.086565632 container start 72b9c2f2492676c950c630bd63f8bab9c8c256ea0bb699e6f08cea690b632ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:42:02 compute-0 podman[164410]: 2025-11-25 09:42:02.211117963 +0000 UTC m=+0.087797331 container attach 72b9c2f2492676c950c630bd63f8bab9c8c256ea0bb699e6f08cea690b632ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:42:02 compute-0 youthful_bhabha[164446]: 167 167
Nov 25 09:42:02 compute-0 systemd[1]: libpod-72b9c2f2492676c950c630bd63f8bab9c8c256ea0bb699e6f08cea690b632ef3.scope: Deactivated successfully.
Nov 25 09:42:02 compute-0 podman[164410]: 2025-11-25 09:42:02.213847343 +0000 UTC m=+0.090526711 container died 72b9c2f2492676c950c630bd63f8bab9c8c256ea0bb699e6f08cea690b632ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:42:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7172b25db035dea3f7f3c283318c7d32276b65b9e54b0085e5fdd165dee637b-merged.mount: Deactivated successfully.
Nov 25 09:42:02 compute-0 podman[164410]: 2025-11-25 09:42:02.233573454 +0000 UTC m=+0.110252822 container remove 72b9c2f2492676c950c630bd63f8bab9c8c256ea0bb699e6f08cea690b632ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bhabha, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 09:42:02 compute-0 podman[164410]: 2025-11-25 09:42:02.140051315 +0000 UTC m=+0.016730703 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:42:02 compute-0 systemd[1]: libpod-conmon-72b9c2f2492676c950c630bd63f8bab9c8c256ea0bb699e6f08cea690b632ef3.scope: Deactivated successfully.
Nov 25 09:42:02 compute-0 sudo[164490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmwbqjooyojeasztfceaclqctexigwyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063721.5557745-1298-270594207519473/AnsiballZ_systemd.py'
Nov 25 09:42:02 compute-0 sudo[164490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:02 compute-0 podman[164498]: 2025-11-25 09:42:02.35683621 +0000 UTC m=+0.031051997 container create 059a942ac4d73ec2253043fe91fb97e795f7f736ec2e45fdf7c57c0f0101a613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:42:02 compute-0 systemd[1]: Started libpod-conmon-059a942ac4d73ec2253043fe91fb97e795f7f736ec2e45fdf7c57c0f0101a613.scope.
Nov 25 09:42:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a407b960661c773745d0dedbc7e9df55538e824936a17b4b2cdc021f43ea3a8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a407b960661c773745d0dedbc7e9df55538e824936a17b4b2cdc021f43ea3a8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a407b960661c773745d0dedbc7e9df55538e824936a17b4b2cdc021f43ea3a8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a407b960661c773745d0dedbc7e9df55538e824936a17b4b2cdc021f43ea3a8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:02 compute-0 podman[164498]: 2025-11-25 09:42:02.421696274 +0000 UTC m=+0.095912059 container init 059a942ac4d73ec2253043fe91fb97e795f7f736ec2e45fdf7c57c0f0101a613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jepsen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:42:02 compute-0 podman[164498]: 2025-11-25 09:42:02.42684631 +0000 UTC m=+0.101062097 container start 059a942ac4d73ec2253043fe91fb97e795f7f736ec2e45fdf7c57c0f0101a613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:42:02 compute-0 podman[164498]: 2025-11-25 09:42:02.431994694 +0000 UTC m=+0.106210501 container attach 059a942ac4d73ec2253043fe91fb97e795f7f736ec2e45fdf7c57c0f0101a613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:42:02 compute-0 podman[164498]: 2025-11-25 09:42:02.344708586 +0000 UTC m=+0.018924372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:42:02 compute-0 python3.9[164492]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:42:02 compute-0 systemd[1]: Reloading.
Nov 25 09:42:02 compute-0 ceph-mon[74207]: pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:42:02 compute-0 systemd-sysv-generator[164537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:42:02 compute-0 systemd-rc-local-generator[164534]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:42:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:02 compute-0 sudo[164490]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:02 compute-0 competent_jepsen[164511]: {}
Nov 25 09:42:02 compute-0 lvm[164670]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:42:02 compute-0 lvm[164670]: VG ceph_vg0 finished
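[editor's note] The two lvm[164670] entries come from LVM's event-based autoactivation: once udev reports every PV of a volume group online, pvscan activates it. Here the single PV of ceph_vg0 sits on /dev/loop3, i.e. this job backs its Ceph OSD with a loop device. A sketch of building such a loop-backed VG, assuming root; the file path and size are arbitrary choices for the example:

import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

# Back a fake OSD "disk" with a sparse file (path/size illustrative).
sh("truncate", "-s", "10G", "/var/tmp/ceph-osd0.img")
loopdev = subprocess.run(
    ["losetup", "--find", "--show", "/var/tmp/ceph-osd0.img"],
    capture_output=True, text=True, check=True).stdout.strip()

# pvcreate/vgcreate fire the udev events; when the VG's last PV shows
# up, autoactivation logs "VG ... is complete" / "VG ... finished".
sh("pvcreate", loopdev)
sh("vgcreate", "ceph_vg0", loopdev)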
Nov 25 09:42:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:02.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
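[editor's note] The anonymous "HEAD / HTTP/1.0" requests radosgw keeps logging from 192.168.122.100/.102 answer with 200 and near-zero latency, which looks like a load-balancer or monitoring liveness probe rather than real S3 traffic. A sketch of such a probe; the endpoint host and port are assumptions, since the journal only records the client side:

import http.client

# Hypothetical RGW endpoint; beast commonly listens on 8080 in
# cephadm deployments, but the log does not show the port.
conn = http.client.HTTPConnection("compute-0", 8080, timeout=2)
conn.request("HEAD", "/")              # anonymous: no auth headers
status = conn.getresponse().status     # logged above as http_status=200
conn.close()
print("rgw alive" if status == 200 else f"rgw returned {status}")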
Nov 25 09:42:02 compute-0 systemd[1]: libpod-059a942ac4d73ec2253043fe91fb97e795f7f736ec2e45fdf7c57c0f0101a613.scope: Deactivated successfully.
Nov 25 09:42:02 compute-0 podman[164498]: 2025-11-25 09:42:02.955651567 +0000 UTC m=+0.629867353 container died 059a942ac4d73ec2253043fe91fb97e795f7f736ec2e45fdf7c57c0f0101a613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:42:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a407b960661c773745d0dedbc7e9df55538e824936a17b4b2cdc021f43ea3a8b-merged.mount: Deactivated successfully.
Nov 25 09:42:02 compute-0 podman[164498]: 2025-11-25 09:42:02.980573271 +0000 UTC m=+0.654789047 container remove 059a942ac4d73ec2253043fe91fb97e795f7f736ec2e45fdf7c57c0f0101a613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jepsen, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:42:02 compute-0 sudo[164703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvwbgdnjmnbfhhhoselsgpaqxltbtoql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063721.5557745-1298-270594207519473/AnsiballZ_systemd.py'
Nov 25 09:42:02 compute-0 sudo[164703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:02 compute-0 systemd[1]: libpod-conmon-059a942ac4d73ec2253043fe91fb97e795f7f736ec2e45fdf7c57c0f0101a613.scope: Deactivated successfully.
Nov 25 09:42:03 compute-0 sudo[164300]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:42:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:42:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:42:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
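[editor's note] These two mon_command calls are the cephadm module (running inside mgr.compute-0.zcfgby) persisting its per-host device inventory and host cache into the monitors' config-key store. The same store is reachable from the CLI; a sketch with a placeholder value, since the real payload is not shown in the log:

import json
import subprocess

# Same key the mgr writes above; the value is a placeholder, not the
# actual inventory cephadm stores.
subprocess.run(
    ["ceph", "config-key", "set",
     "mgr/cephadm/host.compute-0.devices.0",
     json.dumps({"devices": []})],
    check=True)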
Nov 25 09:42:03 compute-0 sudo[164709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:42:03 compute-0 sudo[164709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:03 compute-0 sudo[164709]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:42:03 compute-0 python3.9[164708]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:03 compute-0 systemd[1]: Reloading.
Nov 25 09:42:03 compute-0 systemd-sysv-generator[164759]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:42:03 compute-0 systemd-rc-local-generator[164756]: /etc/rc.d/rc.local is not marked executable, skipping.
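[editor's note] The ansible-systemd invocation above (state=restarted, enabled=True, daemon_reload=False) reduces to an enable plus a restart of edpm_ovn_metadata_agent.service; the module only issues an explicit daemon-reload when asked. Roughly equivalent calls, bypassing Ansible, with the unit name taken from the log:

import subprocess

UNIT = "edpm_ovn_metadata_agent.service"

# enabled=True -> enable; state=restarted -> restart.
# daemon_reload=False in this invocation, so the module itself issues
# no "systemctl daemon-reload" here.
subprocess.run(["systemctl", "enable", UNIT], check=True)
subprocess.run(["systemctl", "restart", UNIT], check=True)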
Nov 25 09:42:03 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 25 09:42:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:03.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33dfd31bb4c7107fb4a7ccaa40d5e4cf716624000372f61696864321b2404bbb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33dfd31bb4c7107fb4a7ccaa40d5e4cf716624000372f61696864321b2404bbb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62.
Nov 25 09:42:03 compute-0 podman[164774]: 2025-11-25 09:42:03.611686459 +0000 UTC m=+0.090697191 container init c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: + sudo -E kolla_set_configs
Nov 25 09:42:03 compute-0 podman[164774]: 2025-11-25 09:42:03.631023168 +0000 UTC m=+0.110033880 container start c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:42:03 compute-0 edpm-start-podman-container[164774]: ovn_metadata_agent
Nov 25 09:42:03 compute-0 edpm-start-podman-container[164773]: Creating additional drop-in dependency for "ovn_metadata_agent" (c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62)
Nov 25 09:42:03 compute-0 podman[164792]: 2025-11-25 09:42:03.681510979 +0000 UTC m=+0.041680340 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
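[editor's note] The health_status=healthy event is produced by the transient "podman healthcheck run <id>" unit systemd started a few lines up; per the container's config_data, the check simply executes /openstack/healthcheck inside the container, and podman turns its exit status into the healthy/unhealthy verdict (health_failing_streak counts consecutive failures). Driving the same check by hand, with the container ID from the log:

import subprocess

CID = "c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62"

# Runs the container's configured healthcheck; podman exits 0 when
# the check passes and nonzero when it fails.
rc = subprocess.run(["podman", "healthcheck", "run", CID]).returncode
print("healthy" if rc == 0 else f"unhealthy (rc={rc})")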
Nov 25 09:42:03 compute-0 systemd[1]: Reloading.
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Validating config file
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Copying service configuration files
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Writing out command to execute
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: ++ cat /run_command
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: + CMD=neutron-ovn-metadata-agent
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: + ARGS=
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: + sudo kolla_copy_cacerts
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: Running command: 'neutron-ovn-metadata-agent'
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: + [[ ! -n '' ]]
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: + . kolla_extend_start
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: + umask 0022
Nov 25 09:42:03 compute-0 ovn_metadata_agent[164786]: + exec neutron-ovn-metadata-agent
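[editor's note] The '+'-prefixed lines are a "set -x" trace of kolla's start script: kolla_set_configs copies the files named in /var/lib/kolla/config_files/config.json into place, the service command is read from /run_command, CA certificates are merged by kolla_copy_cacerts, and the script finally exec's the agent so it replaces the wrapper shell as the container's main process. A simplified Python mirror of that last stretch; the real implementation is the shell script traced above:

import os

# Read the command kolla_set_configs wrote out earlier
# ("Writing out command to execute" above).
with open("/run_command") as f:
    cmd = f.read().strip()          # "neutron-ovn-metadata-agent"

print(f"Running command: '{cmd}'")
os.umask(0o022)
os.execvp(cmd, [cmd])               # exec: no lingering parent shell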
Nov 25 09:42:03 compute-0 systemd-rc-local-generator[164854]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:42:03 compute-0 systemd-sysv-generator[164857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:42:03 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 25 09:42:03 compute-0 sudo[164703]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:42:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:42:04 compute-0 ceph-mon[74207]: pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:42:04 compute-0 sshd-session[155629]: Connection closed by 192.168.122.30 port 42822
Nov 25 09:42:04 compute-0 sshd-session[155625]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:42:04 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 25 09:42:04 compute-0 systemd[1]: session-52.scope: Consumed 39.418s CPU time.
Nov 25 09:42:04 compute-0 systemd-logind[744]: Session 52 logged out. Waiting for processes to exit.
Nov 25 09:42:04 compute-0 systemd-logind[744]: Removed session 52.
Nov 25 09:42:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:04.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.328 164791 INFO neutron.common.config [-] Logging enabled!
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.328 164791 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.329 164791 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.329 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.329 164791 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.329 164791 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.329 164791 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.330 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.330 164791 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.330 164791 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.330 164791 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.330 164791 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.330 164791 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.330 164791 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.330 164791 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.330 164791 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.331 164791 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.332 164791 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.332 164791 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.332 164791 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.332 164791 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.332 164791 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.332 164791 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.332 164791 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.332 164791 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.332 164791 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.333 164791 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.333 164791 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.333 164791 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.333 164791 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.333 164791 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.333 164791 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.333 164791 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.333 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.333 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.334 164791 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.335 164791 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.335 164791 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.335 164791 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.335 164791 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.335 164791 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.335 164791 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.335 164791 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.335 164791 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.335 164791 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.336 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.337 164791 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.338 164791 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.338 164791 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.338 164791 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.338 164791 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.338 164791 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.338 164791 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.338 164791 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.338 164791 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.338 164791 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.339 164791 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.340 164791 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.340 164791 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.340 164791 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.340 164791 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
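[editor's note] Everything from the row of asterisks down to this point is oslo.config's log_opt_values() dumping the parsed configuration at DEBUG level: the sources first (command line, config files), then each [DEFAULT] option, and below this the per-group options (oslo_concurrency.*, profiler.*, oslo_policy.*, privsep.*, ...), with options declared secret, such as transport_url and metadata_proxy_shared_secret, masked as ****. A toy reproduction with oslo.config, assuming the library is installed; the two registered options stand in for neutron's full set:

import logging
from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts([cfg.IntOpt("agent_down_time", default=75)])
CONF.register_opts([cfg.StrOpt("lock_path")], group="oslo_concurrency")
CONF([], project="neutron")        # parse argv / config files

# Emits the same banner, "Configuration options gathered from:" and
# "option = value" lines seen in the journal; secret=True options are
# printed as ****.
CONF.log_opt_values(LOG, logging.DEBUG)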
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.340 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.340 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.340 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.340 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.340 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.341 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.342 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.342 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.342 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.342 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.342 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.342 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.342 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.342 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.342 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.343 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.344 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.344 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.344 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.344 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.344 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.344 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.344 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.344 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.344 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.345 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.346 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.347 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.347 164791 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.347 164791 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.347 164791 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.347 164791 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.347 164791 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.347 164791 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.347 164791 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.347 164791 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.348 164791 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.348 164791 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.348 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.348 164791 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.348 164791 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.348 164791 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.348 164791 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.348 164791 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.348 164791 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.349 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.350 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.350 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.350 164791 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.350 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.350 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.350 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.350 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.350 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.350 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.351 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.352 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.352 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.352 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.352 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.352 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.352 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.352 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.352 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.352 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.353 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.353 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.353 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.353 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.353 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.353 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.353 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.353 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.353 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.354 164791 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.354 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.354 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.354 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.354 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.354 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.354 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.354 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.354 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.355 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.356 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.356 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.356 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.356 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.356 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.356 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.356 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.356 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.357 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.358 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.359 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.359 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.359 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.359 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.359 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.359 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.359 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.359 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.359 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.360 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.361 164791 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.361 164791 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.368 164791 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.368 164791 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.368 164791 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.368 164791 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.369 164791 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.379 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name a23dd616-1012-4f28-8d7d-927fdaae5f69 (UUID: a23dd616-1012-4f28-8d7d-927fdaae5f69) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.396 164791 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.397 164791 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.397 164791 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.397 164791 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.399 164791 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.403 164791 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.407 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'a23dd616-1012-4f28-8d7d-927fdaae5f69'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], external_ids={}, name=a23dd616-1012-4f28-8d7d-927fdaae5f69, nb_cfg_timestamp=1764063678187, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.407 164791 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fbd73b40f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.408 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.408 164791 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.408 164791 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.408 164791 INFO oslo_service.service [-] Starting 1 workers
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.412 164791 DEBUG oslo_service.service [-] Started child 164895 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.414 164791 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpe2ny2j8b/privsep.sock']
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.414 164895 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-457948'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.430 164895 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.430 164895 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.430 164895 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.433 164895 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.438 164895 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.443 164895 INFO eventlet.wsgi.server [-] (164895) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 25 09:42:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:05.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:05 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.938 164791 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.939 164791 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpe2ny2j8b/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.859 164901 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.862 164901 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.864 164901 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.864 164901 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164901
Nov 25 09:42:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:05.941 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[270ba765-4ad4-4144-98b4-b71b1cb7e833]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:42:06 compute-0 ceph-mon[74207]: pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.345 164901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.345 164901 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.345 164901 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.785 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[d87b551a-8475-435c-838d-5d89187d16e3]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.787 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, column=external_ids, values=({'neutron:ovn-metadata-id': '1a57f170-84aa-5abd-8c9b-d24bf96ec0f7'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.793 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.796 164791 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.797 164791 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.797 164791 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.797 164791 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.797 164791 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.797 164791 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.797 164791 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.797 164791 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.797 164791 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.797 164791 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.798 164791 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.798 164791 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.798 164791 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.798 164791 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.798 164791 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.798 164791 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.798 164791 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.798 164791 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.798 164791 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.799 164791 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.799 164791 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.799 164791 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.799 164791 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.799 164791 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.799 164791 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.799 164791 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.799 164791 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.799 164791 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.800 164791 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.800 164791 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.800 164791 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.800 164791 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.800 164791 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.800 164791 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.800 164791 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.800 164791 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.800 164791 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.801 164791 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.801 164791 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.801 164791 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.801 164791 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.801 164791 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.801 164791 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.801 164791 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.801 164791 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.801 164791 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.802 164791 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.803 164791 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.804 164791 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.804 164791 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.804 164791 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.804 164791 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.804 164791 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.804 164791 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.804 164791 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.804 164791 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.804 164791 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.805 164791 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.805 164791 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.805 164791 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.805 164791 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.805 164791 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.805 164791 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.805 164791 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.805 164791 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.805 164791 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.806 164791 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.807 164791 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.808 164791 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.808 164791 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.808 164791 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.808 164791 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.808 164791 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.808 164791 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.808 164791 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.808 164791 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.808 164791 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.809 164791 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.810 164791 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.810 164791 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.810 164791 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.810 164791 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.810 164791 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.810 164791 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.810 164791 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.810 164791 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.810 164791 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.811 164791 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.811 164791 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.811 164791 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.811 164791 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.811 164791 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.811 164791 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.811 164791 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.811 164791 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.811 164791 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.812 164791 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.813 164791 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.814 164791 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.815 164791 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.816 164791 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.816 164791 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.816 164791 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.816 164791 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.816 164791 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.816 164791 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.816 164791 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.816 164791 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.816 164791 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.817 164791 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.818 164791 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.819 164791 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.820 164791 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.821 164791 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.822 164791 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.822 164791 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.822 164791 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.822 164791 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.822 164791 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.822 164791 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.822 164791 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.822 164791 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.822 164791 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.823 164791 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.824 164791 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.824 164791 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.824 164791 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.824 164791 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.824 164791 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.824 164791 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.824 164791 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.824 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.824 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.825 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.826 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.827 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.827 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.827 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.827 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.827 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.827 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.827 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.827 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.827 164791 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.828 164791 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.828 164791 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.828 164791 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.828 164791 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:42:06 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:42:06.828 164791 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
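The dump that ends here is oslo.config's standard startup printout: the agent calls ConfigOpts.log_opt_values(), which walks every registered group and logs one "group.option = value" line (the cfg.py:2609 call site on each line) before closing with the row of asterisks (cfg.py:2613). A minimal sketch of that mechanism, assuming only that oslo.config is installed; the three options registered below are copied from the logged values, and their registered types in the real neutron agent may differ:

import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger('oslo_service.service')  # logger name matches the journal lines

CONF = cfg.CONF
CONF.register_opts(
    [
        # Values copied from the dump above; the types here are a plausible guess.
        cfg.StrOpt('ovn_sb_connection', default='ssl:ovsdbserver-sb.openstack.svc:6642'),
        cfg.IntOpt('ovsdb_connection_timeout', default=180),
        cfg.BoolOpt('ovn_metadata_enabled', default=False),
    ],
    group='ovn',
)

CONF([])  # parse an empty command line; --config-file arguments would go here
# This single call produces the whole "option = value ... cfg.py:2609" block
# and the asterisk separator that terminates it.
CONF.log_opt_values(LOG, logging.DEBUG)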
Nov 25 09:42:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:06.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:06.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:06.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:06.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:06.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
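All three dashboard webhook targets fail identically: np0005534694.shiftstack through np0005534696.shiftstack get "no such host" from the resolver at 192.168.122.80:53, so every notify retry is doomed until DNS (or /etc/hosts) learns those names. The lookup failure is easy to reproduce outside Alertmanager; a minimal sketch, assuming the host uses the same 192.168.122.80 resolver that Alertmanager reports:

import socket

# Hostname copied from the failing webhook URL above; NXDOMAIN surfaces as
# socket.gaierror, the same condition Go reports as "no such host".
try:
    print(socket.getaddrinfo('np0005534694.shiftstack', 8443))
except socket.gaierror as exc:
    print(f'lookup failed: {exc}')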
Nov 25 09:42:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:07.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:08 compute-0 ceph-mon[74207]: pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:08 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 4.
Nov 25 09:42:08 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:42:08 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:42:08 compute-0 podman[164947]: 2025-11-25 09:42:08.820204812 +0000 UTC m=+0.027790391 container create 2e11d8cadc4f0221377ded1dacfd486197db235defde3f188dca2feab655fa47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 09:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d1a5518cb52e3d3fa45065936145ac82534e8879d62d53263e9ef43e4207d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d1a5518cb52e3d3fa45065936145ac82534e8879d62d53263e9ef43e4207d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d1a5518cb52e3d3fa45065936145ac82534e8879d62d53263e9ef43e4207d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d1a5518cb52e3d3fa45065936145ac82534e8879d62d53263e9ef43e4207d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:42:08 compute-0 podman[164947]: 2025-11-25 09:42:08.863119585 +0000 UTC m=+0.070705165 container init 2e11d8cadc4f0221377ded1dacfd486197db235defde3f188dca2feab655fa47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:42:08 compute-0 podman[164947]: 2025-11-25 09:42:08.867626115 +0000 UTC m=+0.075211684 container start 2e11d8cadc4f0221377ded1dacfd486197db235defde3f188dca2feab655fa47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:42:08 compute-0 bash[164947]: 2e11d8cadc4f0221377ded1dacfd486197db235defde3f188dca2feab655fa47
Nov 25 09:42:08 compute-0 podman[164947]: 2025-11-25 09:42:08.809032612 +0000 UTC m=+0.016618191 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:42:08 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
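systemd logged the unit's restart counter at 4 before launching attempt five through podman. The counter is readable straight from the unit's properties; a sketch via systemctl show, with the unit name copied verbatim from the journal:

import subprocess

UNIT = 'ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service'
# NRestarts is the same counter the "Scheduled restart job" message reports.
out = subprocess.run(
    ['systemctl', 'show', UNIT, '--property=NRestarts,ActiveState'],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # e.g. NRestarts=4 and ActiveState=active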
Nov 25 09:42:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:42:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:42:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:42:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:42:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:42:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:42:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:08.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:42:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:09 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:42:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:42:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:42:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:09.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
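The steady cadence of anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, always 200 with zero bytes, is the signature of load-balancer health probes against the RGW beast frontend. A probe of the same shape using only the standard library; the frontend address and port below are assumptions, since the journal records only the probing clients:

import http.client

# Address and port are placeholders: substitute the actual beast endpoint.
conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=5)
conn.request('HEAD', '/')
resp = conn.getresponse()
print(resp.status, resp.reason)  # a healthy RGW answers 200 with no body
conn.close()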
Nov 25 09:42:10 compute-0 sshd-session[165003]: Accepted publickey for zuul from 192.168.122.30 port 38916 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:42:10 compute-0 systemd-logind[744]: New session 53 of user zuul.
Nov 25 09:42:10 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 25 09:42:10 compute-0 sshd-session[165003]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:42:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:10] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:42:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:10] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:42:10 compute-0 ceph-mon[74207]: pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:42:10 compute-0 podman[165130]: 2025-11-25 09:42:10.640393923 +0000 UTC m=+0.059633174 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:42:10 compute-0 python3.9[165169]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:42:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:10.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:11.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:11 compute-0 sudo[165334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlvevxfpkrslqdhodztveykfkqwbngtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063731.3615942-62-82484483546346/AnsiballZ_command.py'
Nov 25 09:42:11 compute-0 sudo[165334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:11 compute-0 python3.9[165336]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:11 compute-0 sudo[165334]: pam_unix(sudo:session): session closed for user root
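The task logged at 09:42:11 checks whether a container named exactly nova_virtlogd still exists before the tripleo_nova_* services are retired below. The same check expressed directly, with the podman flags copied from the logged command:

import subprocess

# ^nova_virtlogd$ anchors the name filter so a name like nova_virtlogd_wrapper
# would not match; --format {{.Names}} prints one container name per line.
result = subprocess.run(
    ['podman', 'ps', '-a', '--filter', 'name=^nova_virtlogd$',
     '--format', '{{.Names}}'],
    capture_output=True, text=True, check=True,
)
names = result.stdout.split()
print('found' if names else 'absent', names)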
Nov 25 09:42:12 compute-0 ceph-mon[74207]: pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:12 compute-0 sudo[165497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wexjimakceffjbezxrdmnkvgzooddwnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063732.2281451-95-36672287522719/AnsiballZ_systemd_service.py'
Nov 25 09:42:12 compute-0 sudo[165497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:12 compute-0 python3.9[165499]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:42:12 compute-0 systemd[1]: Reloading.
Nov 25 09:42:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:12.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:13 compute-0 systemd-rc-local-generator[165522]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:42:13 compute-0 systemd-sysv-generator[165526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:42:13 compute-0 sudo[165497]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:13.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:13 compute-0 python3.9[165684]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:42:13 compute-0 network[165702]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:42:13 compute-0 network[165703]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:42:13 compute-0 network[165704]: It is advised to switch to 'NetworkManager' instead for network management.
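These three lines are the deprecation banner of the legacy initscripts 'network' service, whose SysV script systemd-sysv-generator wrapped moments earlier. Whether the unit in use really is a generated compatibility shim can be read from its enablement state; a sketch, relying on systemctl reporting "generated" for generator-made units:

import subprocess

state = subprocess.run(
    ['systemctl', 'is-enabled', 'network'],
    capture_output=True, text=True,
).stdout.strip()
print(state)  # "generated" marks a unit synthesized from /etc/rc.d/init.d/network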
Nov 25 09:42:14 compute-0 ceph-mon[74207]: pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:42:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
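The audit line records the mgr (entity mgr.compute-0.zcfgby) dispatching "osd blocklist ls" as structured JSON. The identical query is available from the CLI; a sketch, assuming an admin keyring is readable on the node:

import json
import subprocess

out = subprocess.run(
    ['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'],
    capture_output=True, text=True, check=True,
).stdout
# An empty blocklist serializes as an empty JSON array.
print(json.loads(out) if out.strip() else [])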
Nov 25 09:42:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:15 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:42:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:15 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:42:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:15.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:16 compute-0 ceph-mon[74207]: pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:16.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:16.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:16.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:16.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:16.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:17 compute-0 sudo[165967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeqvbfrbyyltffxlkircdmcyiehyocwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063736.8537533-152-248172097554626/AnsiballZ_systemd_service.py'
Nov 25 09:42:17 compute-0 sudo[165967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:17 compute-0 python3.9[165969]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:17 compute-0 sudo[165967]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:17.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:17 compute-0 sudo[166120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npumbfmwuzrdiodeoakrwltfnarkvetb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063737.4200072-152-258342040065397/AnsiballZ_systemd_service.py'
Nov 25 09:42:17 compute-0 sudo[166120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:17 compute-0 python3.9[166122]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:17 compute-0 sudo[166120]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:18 compute-0 sudo[166275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvxpiuehsuiyujozawtzfjnrbtlybnnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063737.982118-152-3475127909115/AnsiballZ_systemd_service.py'
Nov 25 09:42:18 compute-0 sudo[166275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:18 compute-0 python3.9[166277]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:18 compute-0 ceph-mon[74207]: pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:18 compute-0 sudo[166275]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:18 compute-0 sudo[166428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fttorkmcljojrkhjpjoomgeldijkewkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063738.5356665-152-4111469530051/AnsiballZ_systemd_service.py'
Nov 25 09:42:18 compute-0 sudo[166428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:18 compute-0 python3.9[166430]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:18.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:19 compute-0 sudo[166428]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:19 compute-0 sudo[166581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgbbylyhkrhrkyvshiaolidkdqocmlvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063739.0867867-152-103368331286959/AnsiballZ_systemd_service.py'
Nov 25 09:42:19 compute-0 sudo[166581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:19 compute-0 python3.9[166583]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:19 compute-0 sudo[166581]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:19.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:19 compute-0 sudo[166735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjiihkdcnqqyheedzkxjrbvdbpmhzysv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063739.6295981-152-114221042605171/AnsiballZ_systemd_service.py'
Nov 25 09:42:19 compute-0 sudo[166735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:19 compute-0 sudo[166736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:42:19 compute-0 sudo[166736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:19 compute-0 sudo[166736]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:20 compute-0 python3.9[166743]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:20 compute-0 sudo[166735]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:20] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:42:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:20] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:42:20 compute-0 sudo[166914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otkvfyntjyjtvgwuelurfifodwbsinua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063740.17105-152-39702633650973/AnsiballZ_systemd_service.py'
Nov 25 09:42:20 compute-0 sudo[166914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:20 compute-0 ceph-mon[74207]: pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:20 compute-0 python3.9[166916]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:42:20 compute-0 sudo[166914]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:20.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:42:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:42:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:21.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:21 compute-0 sudo[167084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqjgzdkvpmaudtfyxdfzlcukqhejreak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063741.4666996-308-69751688168456/AnsiballZ_file.py'
Nov 25 09:42:21 compute-0 sudo[167084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:21 compute-0 python3.9[167086]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:21 compute-0 sudo[167084]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:22 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e69fe870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:22 compute-0 sudo[167237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzfqutbxejcvaswsgcpgvslqpmftopgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063742.074225-308-106694412962461/AnsiballZ_file.py'
Nov 25 09:42:22 compute-0 sudo[167237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:22 compute-0 python3.9[167239]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:22 compute-0 sudo[167237]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:22 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:22 compute-0 ceph-mon[74207]: pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:42:22 compute-0 sudo[167389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkivksaeqhmsoymcuwuspwaueogvyuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063742.5015593-308-206942565300841/AnsiballZ_file.py'
Nov 25 09:42:22 compute-0 sudo[167389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:22 compute-0 python3.9[167391]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:22 compute-0 sudo[167389]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:22.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:23 compute-0 sudo[167541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmsfospvscsdtoqegbdrqafhijfbxvel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063742.9213033-308-131085470226852/AnsiballZ_file.py'
Nov 25 09:42:23 compute-0 sudo[167541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094223 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:42:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:23 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce4001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:42:23 compute-0 python3.9[167543]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:23 compute-0 sudo[167541]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:23 compute-0 sudo[167693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvauqbyvheeowaywmadqoxodfggxaogd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063743.3403125-308-195960307220628/AnsiballZ_file.py'
Nov 25 09:42:23 compute-0 sudo[167693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:23.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:23 compute-0 python3.9[167695]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:23 compute-0 sudo[167693]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:23 compute-0 sudo[167847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcguezroxmanvztmxckvswqrjrxfvtzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063743.7574198-308-197686836763983/AnsiballZ_file.py'
Nov 25 09:42:23 compute-0 sudo[167847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:24 compute-0 python3.9[167849]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:24 compute-0 sudo[167847]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:24 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce4001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:24 compute-0 sudo[167999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekfynsxjfpzdlbbylpghwbnmmajwobox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063744.1859746-308-93486661262588/AnsiballZ_file.py'
Nov 25 09:42:24 compute-0 sudo[167999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:24 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e69fe870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:24 compute-0 ceph-mon[74207]: pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:42:24 compute-0 python3.9[168001]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:24 compute-0 sudo[167999]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094224 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:42:24 compute-0 sudo[168151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rodotclsqionvjqhykcnyrfjuyuhfmtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063744.7454398-458-233748860575393/AnsiballZ_file.py'
Nov 25 09:42:24 compute-0 sudo[168151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:24.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:25 compute-0 python3.9[168153]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:25 compute-0 sudo[168151]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:25 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:42:25 compute-0 sudo[168303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isjvjjatjvihqxttupbqljveuwubvefn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063745.1631224-458-45438642758810/AnsiballZ_file.py'
Nov 25 09:42:25 compute-0 sudo[168303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:25 compute-0 python3.9[168305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:25 compute-0 sudo[168303]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:25 compute-0 sudo[168456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hweaduytjketkahmdnlrxovowmtflhab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063745.6052942-458-108362112823103/AnsiballZ_file.py'
Nov 25 09:42:25 compute-0 sudo[168456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:25 compute-0 python3.9[168458]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:25 compute-0 sudo[168456]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:26 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce4008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:26 compute-0 sudo[168609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gysbfozbtzxvsoinnsihazxxfsqtvbux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063746.0239575-458-8011538781214/AnsiballZ_file.py'
Nov 25 09:42:26 compute-0 sudo[168609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:26 compute-0 python3.9[168611]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:26 compute-0 sudo[168609]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:26 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce4008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:26 compute-0 ceph-mon[74207]: pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:42:26 compute-0 sudo[168761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msauopkaifxoaeehxytclnnnpgouukxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063746.4461546-458-52250948204430/AnsiballZ_file.py'
Nov 25 09:42:26 compute-0 sudo[168761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:26 compute-0 python3.9[168763]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:26 compute-0 sudo[168761]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:26.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:26.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:26 compute-0 sudo[168913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmaygvpsquzsjvbeehiuzacqjvtrsxwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063746.8362236-458-17686341275472/AnsiballZ_file.py'
Nov 25 09:42:26 compute-0 sudo[168913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:27 compute-0 python3.9[168915]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:27 compute-0 sudo[168913]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:27 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e69fe870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:42:27 compute-0 sudo[169065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahknbbzlqvwcfyspfjmhrisonilimfkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063747.2413502-458-129713237746102/AnsiballZ_file.py'
Nov 25 09:42:27 compute-0 sudo[169065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:27 compute-0 python3.9[169067]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:27.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:27 compute-0 sudo[169065]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:28 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e69fe870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:28 compute-0 sudo[169219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duuzqngjeyiahqfauhquvthqetfjerxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063747.9575207-611-66937237258201/AnsiballZ_command.py'
Nov 25 09:42:28 compute-0 sudo[169219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:42:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8035 writes, 32K keys, 8035 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 8035 writes, 1713 syncs, 4.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8035 writes, 32K keys, 8035 commit groups, 1.0 writes per commit group, ingest: 20.90 MB, 0.03 MB/s
                                           Interval WAL: 8035 writes, 1713 syncs, 4.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:42:28 compute-0 python3.9[169221]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:28 compute-0 sudo[169219]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:28 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce4009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:28 compute-0 ceph-mon[74207]: pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:42:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:28.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:29 compute-0 python3.9[169373]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:42:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:29 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce4009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:29.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:29 compute-0 sudo[169524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdzskghoanxntkyerfjqunhgplgtlgtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063749.5030699-665-112506570283128/AnsiballZ_systemd_service.py'
Nov 25 09:42:29 compute-0 sudo[169524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:29 compute-0 python3.9[169526]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:42:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:42:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:29 compute-0 systemd[1]: Reloading.
Nov 25 09:42:30 compute-0 systemd-rc-local-generator[169549]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:42:30 compute-0 systemd-sysv-generator[169552]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:42:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:30 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:30 compute-0 sudo[169524]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:30] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 25 09:42:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:30] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 25 09:42:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:30 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:30 compute-0 ceph-mon[74207]: pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:30 compute-0 sudo[169712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnhtfpyhlcqlykcukllxdsmmzrkzmxnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063750.432518-689-209720119834243/AnsiballZ_command.py'
Nov 25 09:42:30 compute-0 sudo[169712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:30 compute-0 python3.9[169714]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:30 compute-0 sudo[169712]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:30.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:31 compute-0 sudo[169865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfvaywwhnesohckrahnrhgbulaewygyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063750.894701-689-245085937614519/AnsiballZ_command.py'
Nov 25 09:42:31 compute-0 sudo[169865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:31 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:31 compute-0 python3.9[169867]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 25 09:42:31 compute-0 sudo[169865]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:31 compute-0 sudo[170018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moyhtccnlmabomctnglqcjyjdtwgnotf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063751.3334196-689-167830163607750/AnsiballZ_command.py'
Nov 25 09:42:31 compute-0 sudo[170018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:31.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:31 compute-0 python3.9[170020]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:31 compute-0 sudo[170018]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:31 compute-0 sudo[170173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrgrkbuycytfoayeluchhhvvcvyoydik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063751.766461-689-267711168245431/AnsiballZ_command.py'
Nov 25 09:42:31 compute-0 sudo[170173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:32 compute-0 python3.9[170175]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:32 compute-0 sudo[170173]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:32 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:32 compute-0 sudo[170326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaefhdxbamioabkdoqqmqkhrjjckxoya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063752.190871-689-187174916047682/AnsiballZ_command.py'
Nov 25 09:42:32 compute-0 sudo[170326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:32 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:32 compute-0 ceph-mon[74207]: pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 25 09:42:32 compute-0 python3.9[170328]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:32 compute-0 sudo[170326]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:32 compute-0 sudo[170479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ditxlmehnyrycdyifotehwxfefxcbbpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063752.6233048-689-43248320397692/AnsiballZ_command.py'
Nov 25 09:42:32 compute-0 sudo[170479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:32 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:42:32 compute-0 python3.9[170481]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:32 compute-0 sudo[170479]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:33 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:33 compute-0 sudo[170632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fksyhwzznkbzmwbnuyxcjabbfwrzfnqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063753.0705824-689-172274254073045/AnsiballZ_command.py'
Nov 25 09:42:33 compute-0 sudo[170632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:33 compute-0 python3.9[170634]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:42:33 compute-0 sudo[170632]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:33.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:33 compute-0 podman[170662]: 2025-11-25 09:42:33.996782214 +0000 UTC m=+0.061373836 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:42:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:34 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:34 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0003660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:34 compute-0 ceph-mon[74207]: pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:34 compute-0 sudo[170803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptprgjdfayowghahxeqpwiuotdphsipr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063754.558877-851-123937908187567/AnsiballZ_getent.py'
Nov 25 09:42:34 compute-0 sudo[170803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:34.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:34 compute-0 python3.9[170805]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 09:42:35 compute-0 sudo[170803]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:35 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400c240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:35 compute-0 sudo[170956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrhbfldgshtpwzihunrdlhiyxevgtaon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063755.1958604-875-276037310133371/AnsiballZ_group.py'
Nov 25 09:42:35 compute-0 sudo[170956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:35.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:35 compute-0 python3.9[170958]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:42:35 compute-0 groupadd[170959]: group added to /etc/group: name=libvirt, GID=42473
Nov 25 09:42:35 compute-0 groupadd[170959]: group added to /etc/gshadow: name=libvirt
Nov 25 09:42:35 compute-0 groupadd[170959]: new group: name=libvirt, GID=42473
Nov 25 09:42:35 compute-0 sudo[170956]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:35 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:42:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:35 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:42:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:35 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:42:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:36 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:36 compute-0 sudo[171116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuetanzhdxfuijbobfgvlungjfnfhdkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063755.8818202-899-156582126749100/AnsiballZ_user.py'
Nov 25 09:42:36 compute-0 sudo[171116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:36 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:36 compute-0 python3.9[171118]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 09:42:36 compute-0 useradd[171120]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 09:42:36 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:42:36 compute-0 ceph-mon[74207]: pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:36 compute-0 sudo[171116]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:36.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:36.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:36.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:36.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:36.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:37 compute-0 sudo[171277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddugcwbetlqqnagrwwcjleqlovqrzuds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063756.9191067-932-139525244356911/AnsiballZ_setup.py'
Nov 25 09:42:37 compute-0 sudo[171277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:37 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0003660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:37 compute-0 python3.9[171279]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:42:37 compute-0 sudo[171277]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:37.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:37 compute-0 sudo[171362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsttsecpbuiqjaahsdachnjpvtayucgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063756.9191067-932-139525244356911/AnsiballZ_dnf.py'
Nov 25 09:42:37 compute-0 sudo[171362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:38 compute-0 python3.9[171365]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:42:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:38 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400c240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:38 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400c240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:38 compute-0 ceph-mon[74207]: pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:38 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:42:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:38.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:39 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:39.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:39 compute-0 sudo[171372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:42:39 compute-0 sudo[171372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:39 compute-0 sudo[171372]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:40 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:40] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 25 09:42:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:40] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Nov 25 09:42:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:40 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:40 compute-0 ceph-mon[74207]: pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:40.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:40 compute-0 podman[171402]: 2025-11-25 09:42:40.999503155 +0000 UTC m=+0.064705940 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 09:42:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:41 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:42:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:41.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:42 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:42 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:42 compute-0 ceph-mon[74207]: pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:42:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:42.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:43 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:43.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:44 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:44 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:44 compute-0 ceph-mon[74207]: pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094244 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:42:44
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', '.nfs', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control']
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:42:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:42:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:42:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:42:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:45.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:42:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:45 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:42:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:45.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:42:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:46 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:46 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:46 compute-0 ceph-mon[74207]: pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:46.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:46.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:46.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:46.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:47.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:47 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:47.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:48 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:48 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:48 compute-0 ceph-mon[74207]: pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:42:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:49.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:49 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:49.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:50 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:50] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 25 09:42:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:42:50] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Nov 25 09:42:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:50 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:50 compute-0 ceph-mon[74207]: pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:51.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:51 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:51.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:52 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:52 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:52 compute-0 ceph-mon[74207]: pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:42:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:53.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:53 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:42:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:53.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:54 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3cfc002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:54 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3cfc002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:54 compute-0 ceph-mon[74207]: pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:42:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:55.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:55 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3cf8003a50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:42:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
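----[ editor's note ]----------------------------------------------------------
The pg_autoscaler block above is one full evaluation pass. For every pool it
logs the root capacity (64411926528 B = 60 GiB) and a raw target of
capacity_ratio * bias * 300, where the factor 300 is recovered from the logged
numbers themselves (pg_target / (ratio * bias) == 300 for every pool with
nonzero usage) and is consistent with 3 OSDs at the default
mon_target_pg_per_osd = 100 -- an assumption, since neither value appears in
the log. The raw target is then quantized to a power of two and clamped by
per-pool minimums, which is why these microscopic targets leave pg_num at the
current 1/16/32. A check, with ratios and biases copied from the lines above:

    PG_BUDGET = 300  # assumed: 3 OSDs * mon_target_pg_per_osd (100)
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (3.8154424692322717e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # .mgr               0.0021557249951162337  -> quantized to 1
    # cephfs.cephfs.meta 0.0006104707950771635  -> quantized to 16
    # .rgw.root          0.00011446327407696816 -> quantized to 32
    # default.rgw.log    0.0006486252197694863  -> quantized to 32
-------------------------------------------------------------------------------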
Nov 25 09:42:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:42:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:42:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:56 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:56 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:56 compute-0 ceph-mon[74207]: pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:42:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:56.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:56.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:56.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:42:56.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:42:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:57 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:42:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:57.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:42:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:58 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3cf8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:58 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:58 compute-0 ceph-mon[74207]: pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:42:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:59.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:42:59 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400cd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:42:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:42:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:42:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:42:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:42:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:42:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:42:59 compute-0 sudo[171608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:42:59 compute-0 sudo[171608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:42:59 compute-0 sudo[171608]: pam_unix(sudo:session): session closed for user root
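----[ editor's note ]----------------------------------------------------------
Two periodic health rituals meet here: the cephadm mgr module asks the monitor
for the OSD blocklist (the audit line shows the exact mon command), and
immediately afterwards it reaches the host over SSH as ceph-admin and runs
"sudo /bin/true" to prove passwordless root still works before doing real
work. The same blocklist query can be issued by hand; a sketch assuming the
ceph CLI and an admin keyring are present on the host:

    import json, subprocess

    # CLI form of the dispatched mon command
    # {"prefix": "osd blocklist ls", "format": "json"}:
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out or "[]"))
-------------------------------------------------------------------------------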
Nov 25 09:43:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:00 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:00] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:43:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:00] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:43:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:00 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:00 compute-0 ceph-mon[74207]: pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:01.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:01 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:01.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:02 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:02 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55e2e8f8a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:02 compute-0 ceph-mon[74207]: pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:03.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:03 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bf3c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:03.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:03 compute-0 sudo[171656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:43:03 compute-0 sudo[171656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:03 compute-0 sudo[171656]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:03 compute-0 sudo[171681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:43:03 compute-0 sudo[171681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:43:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:43:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-0 sudo[171681]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:04 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:43:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:43:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:43:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:43:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:43:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:43:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:43:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:43:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:43:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:43:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:04 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:04 compute-0 sudo[171740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:43:04 compute-0 sudo[171740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:04 compute-0 sudo[171740]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:04 compute-0 sudo[171771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:43:04 compute-0 sudo[171771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
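----[ editor's note ]----------------------------------------------------------
This is cephadm's OSD-apply path: the mgr has copied a content-addressed
binary (cephadm.1a88...) under /var/lib/ceph/<fsid>/ and now runs it under
sudo, asking ceph-volume inside the pinned quay.io/ceph/ceph image to build
OSDs from the pre-created LV /dev/ceph_vg0/ceph_lv0. The drive-group name
travels in CEPH_VOLUME_OSDSPEC_AFFINITY, and "--config-json -" means the
ceph.conf/keyring payload arrives on stdin. A sketch of the same call from
Python (the payload contents are placeholders, not recovered from the log):

    import json, subprocess

    FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    payload = json.dumps({"config": "<ceph.conf>", "keyring": "<keyring>"})
    subprocess.run(
        ["sudo", "python3", CEPHADM,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
         "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--yes", "--no-systemd"],
        input=payload, text=True, check=True)
-------------------------------------------------------------------------------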
Nov 25 09:43:04 compute-0 podman[171764]: 2025-11-25 09:43:04.550534256 +0000 UTC m=+0.050660030 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 25 09:43:04 compute-0 ceph-mon[74207]: pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:43:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:43:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:43:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:43:04 compute-0 podman[171836]: 2025-11-25 09:43:04.842111314 +0000 UTC m=+0.027143822 container create c2c07e4e3f6e37a1e9f58bcc9288903ffb6f0c52133eafec99bad19cb348bac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:43:04 compute-0 systemd[1]: Started libpod-conmon-c2c07e4e3f6e37a1e9f58bcc9288903ffb6f0c52133eafec99bad19cb348bac5.scope.
Nov 25 09:43:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:04 compute-0 podman[171836]: 2025-11-25 09:43:04.896326399 +0000 UTC m=+0.081358926 container init c2c07e4e3f6e37a1e9f58bcc9288903ffb6f0c52133eafec99bad19cb348bac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_williams, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:43:04 compute-0 podman[171836]: 2025-11-25 09:43:04.901999448 +0000 UTC m=+0.087031956 container start c2c07e4e3f6e37a1e9f58bcc9288903ffb6f0c52133eafec99bad19cb348bac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_williams, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:43:04 compute-0 podman[171836]: 2025-11-25 09:43:04.90310904 +0000 UTC m=+0.088141549 container attach c2c07e4e3f6e37a1e9f58bcc9288903ffb6f0c52133eafec99bad19cb348bac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_williams, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:43:04 compute-0 dreamy_williams[171850]: 167 167
Nov 25 09:43:04 compute-0 systemd[1]: libpod-c2c07e4e3f6e37a1e9f58bcc9288903ffb6f0c52133eafec99bad19cb348bac5.scope: Deactivated successfully.
Nov 25 09:43:04 compute-0 podman[171836]: 2025-11-25 09:43:04.906636944 +0000 UTC m=+0.091669451 container died c2c07e4e3f6e37a1e9f58bcc9288903ffb6f0c52133eafec99bad19cb348bac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_williams, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5196dd3a7f1a45d0e291ca42be3d12ead232bb1532b9f66f63f9b710bbdf4ee0-merged.mount: Deactivated successfully.
Nov 25 09:43:04 compute-0 podman[171836]: 2025-11-25 09:43:04.830149796 +0000 UTC m=+0.015182324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:43:04 compute-0 podman[171836]: 2025-11-25 09:43:04.927801859 +0000 UTC m=+0.112834377 container remove c2c07e4e3f6e37a1e9f58bcc9288903ffb6f0c52133eafec99bad19cb348bac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_williams, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:43:04 compute-0 systemd[1]: libpod-conmon-c2c07e4e3f6e37a1e9f58bcc9288903ffb6f0c52133eafec99bad19cb348bac5.scope: Deactivated successfully.
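----[ editor's note ]----------------------------------------------------------
Each cephadm call above materializes as a disposable, randomly named podman
container (dreamy_williams here; tender_moser and nervous_heyrovsky below)
that runs create -> init -> start -> attach -> died -> remove in well under a
second. The "167 167" it prints matches the ceph user's uid/gid inside the
image, which cephadm probes so it can chown files on the host -- an inference;
the log shows only the two numbers. An equivalent one-shot run:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # --rm reproduces the create/start/died/remove cycle in the log;
    # stat prints the owner of /var/lib/ceph inside the image ("167 167").
    print(subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True).stdout.strip())
-------------------------------------------------------------------------------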
Nov 25 09:43:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:05.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:05 compute-0 podman[171873]: 2025-11-25 09:43:05.056145764 +0000 UTC m=+0.027548395 container create 336d0aa507521d35533ae602cf563e489a0d0185c09fc7b259ad937e1efcbb51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:43:05 compute-0 systemd[1]: Started libpod-conmon-336d0aa507521d35533ae602cf563e489a0d0185c09fc7b259ad937e1efcbb51.scope.
Nov 25 09:43:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a264061a41de099e6623e7f27818722717861a281161e29e2f3b5a536d51632d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a264061a41de099e6623e7f27818722717861a281161e29e2f3b5a536d51632d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a264061a41de099e6623e7f27818722717861a281161e29e2f3b5a536d51632d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a264061a41de099e6623e7f27818722717861a281161e29e2f3b5a536d51632d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a264061a41de099e6623e7f27818722717861a281161e29e2f3b5a536d51632d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:05 compute-0 podman[171873]: 2025-11-25 09:43:05.123052113 +0000 UTC m=+0.094454764 container init 336d0aa507521d35533ae602cf563e489a0d0185c09fc7b259ad937e1efcbb51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:43:05 compute-0 podman[171873]: 2025-11-25 09:43:05.128391072 +0000 UTC m=+0.099793703 container start 336d0aa507521d35533ae602cf563e489a0d0185c09fc7b259ad937e1efcbb51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_moser, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:43:05 compute-0 podman[171873]: 2025-11-25 09:43:05.129460458 +0000 UTC m=+0.100863089 container attach 336d0aa507521d35533ae602cf563e489a0d0185c09fc7b259ad937e1efcbb51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:43:05 compute-0 podman[171873]: 2025-11-25 09:43:05.045335577 +0000 UTC m=+0.016738228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:43:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:05 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:05 compute-0 tender_moser[171886]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:43:05 compute-0 tender_moser[171886]: --> All data devices are unavailable
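----[ editor's note ]----------------------------------------------------------
"passed data devices: 0 physical, 1 LVM" / "All data devices are unavailable"
is ceph-volume's way of saying the one LV offered (/dev/ceph_vg0/ceph_lv0) is
already consumed -- most plausibly it already carries this host's OSD, so the
batch run becomes a no-op rather than an error. The "lvm list --format json"
call that follows at 09:43:05 is cephadm confirming what sits on the LV. A
sketch of that confirmation, assuming the JSON lands on stdout:

    import json, subprocess

    FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--",
         "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    # Keys are OSD ids; an entry pointing at ceph_lv0 explains "unavailable".
    for osd_id, devs in json.loads(out).items():
        print(osd_id, [d.get("lv_path") for d in devs])
-------------------------------------------------------------------------------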
Nov 25 09:43:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:43:05.370 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:43:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:43:05.371 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:43:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:43:05.371 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
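----[ editor's note ]----------------------------------------------------------
The three ovn_metadata_agent DEBUG lines are the standard acquire / acquired /
released trace that oslo.concurrency emits around a synchronized section;
neutron's ProcessMonitor takes the "_check_child_processes" lock each time it
sweeps its spawned haproxy children. The shape of the instrumented code, as a
sketch (the body is illustrative, not neutron's actual implementation):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Each call produces the Acquiring/acquired/released trio
        # seen in the log, with waited/held timings.
        pass

    _check_child_processes()
-------------------------------------------------------------------------------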
Nov 25 09:43:05 compute-0 systemd[1]: libpod-336d0aa507521d35533ae602cf563e489a0d0185c09fc7b259ad937e1efcbb51.scope: Deactivated successfully.
Nov 25 09:43:05 compute-0 podman[171873]: 2025-11-25 09:43:05.389169345 +0000 UTC m=+0.360571976 container died 336d0aa507521d35533ae602cf563e489a0d0185c09fc7b259ad937e1efcbb51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_moser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a264061a41de099e6623e7f27818722717861a281161e29e2f3b5a536d51632d-merged.mount: Deactivated successfully.
Nov 25 09:43:05 compute-0 podman[171873]: 2025-11-25 09:43:05.411771651 +0000 UTC m=+0.383174282 container remove 336d0aa507521d35533ae602cf563e489a0d0185c09fc7b259ad937e1efcbb51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_moser, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:43:05 compute-0 systemd[1]: libpod-conmon-336d0aa507521d35533ae602cf563e489a0d0185c09fc7b259ad937e1efcbb51.scope: Deactivated successfully.
Nov 25 09:43:05 compute-0 sudo[171771]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:05 compute-0 sudo[171910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:43:05 compute-0 sudo[171910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:05 compute-0 sudo[171910]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:05 compute-0 sudo[171935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:43:05 compute-0 sudo[171935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:05.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:05 compute-0 podman[171991]: 2025-11-25 09:43:05.814256872 +0000 UTC m=+0.027311629 container create 48e7b235e8ee47514c38652314430c4b6777db4792c6d61229ab13f5ed022241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:43:05 compute-0 systemd[1]: Started libpod-conmon-48e7b235e8ee47514c38652314430c4b6777db4792c6d61229ab13f5ed022241.scope.
Nov 25 09:43:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:05 compute-0 podman[171991]: 2025-11-25 09:43:05.880638752 +0000 UTC m=+0.093693509 container init 48e7b235e8ee47514c38652314430c4b6777db4792c6d61229ab13f5ed022241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_heyrovsky, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 25 09:43:05 compute-0 podman[171991]: 2025-11-25 09:43:05.8855427 +0000 UTC m=+0.098597457 container start 48e7b235e8ee47514c38652314430c4b6777db4792c6d61229ab13f5ed022241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_heyrovsky, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:43:05 compute-0 podman[171991]: 2025-11-25 09:43:05.887197741 +0000 UTC m=+0.100252498 container attach 48e7b235e8ee47514c38652314430c4b6777db4792c6d61229ab13f5ed022241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:43:05 compute-0 nervous_heyrovsky[172004]: 167 167
Nov 25 09:43:05 compute-0 systemd[1]: libpod-48e7b235e8ee47514c38652314430c4b6777db4792c6d61229ab13f5ed022241.scope: Deactivated successfully.
Nov 25 09:43:05 compute-0 podman[171991]: 2025-11-25 09:43:05.889839985 +0000 UTC m=+0.102894742 container died 48e7b235e8ee47514c38652314430c4b6777db4792c6d61229ab13f5ed022241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:43:05 compute-0 podman[171991]: 2025-11-25 09:43:05.802704635 +0000 UTC m=+0.015759413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-37126e6ca132860eb8767130409a454858af471bbd868a7c53fc1303350cb079-merged.mount: Deactivated successfully.
Nov 25 09:43:05 compute-0 podman[171991]: 2025-11-25 09:43:05.908276093 +0000 UTC m=+0.121330851 container remove 48e7b235e8ee47514c38652314430c4b6777db4792c6d61229ab13f5ed022241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:43:05 compute-0 systemd[1]: libpod-conmon-48e7b235e8ee47514c38652314430c4b6777db4792c6d61229ab13f5ed022241.scope: Deactivated successfully.
Nov 25 09:43:06 compute-0 podman[172027]: 2025-11-25 09:43:06.029088145 +0000 UTC m=+0.027772868 container create 9f84af8545bdd1b24f353f101b40c8f58053465a47aa6c2626839fc1d4fbedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:43:06 compute-0 systemd[1]: Started libpod-conmon-9f84af8545bdd1b24f353f101b40c8f58053465a47aa6c2626839fc1d4fbedd7.scope.
Nov 25 09:43:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be6ed125510ec68ce5fcf9af4ef701d6c4bc6a5ceb73e85186f2572129c1d79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be6ed125510ec68ce5fcf9af4ef701d6c4bc6a5ceb73e85186f2572129c1d79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be6ed125510ec68ce5fcf9af4ef701d6c4bc6a5ceb73e85186f2572129c1d79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be6ed125510ec68ce5fcf9af4ef701d6c4bc6a5ceb73e85186f2572129c1d79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:06 compute-0 podman[172027]: 2025-11-25 09:43:06.08786912 +0000 UTC m=+0.086553843 container init 9f84af8545bdd1b24f353f101b40c8f58053465a47aa6c2626839fc1d4fbedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:43:06 compute-0 podman[172027]: 2025-11-25 09:43:06.096119027 +0000 UTC m=+0.094803752 container start 9f84af8545bdd1b24f353f101b40c8f58053465a47aa6c2626839fc1d4fbedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:43:06 compute-0 podman[172027]: 2025-11-25 09:43:06.097203022 +0000 UTC m=+0.095887746 container attach 9f84af8545bdd1b24f353f101b40c8f58053465a47aa6c2626839fc1d4fbedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 09:43:06 compute-0 podman[172027]: 2025-11-25 09:43:06.018165815 +0000 UTC m=+0.016850559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:43:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:06 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d08002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:06 compute-0 interesting_almeida[172040]: {
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:     "1": [
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:         {
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "devices": [
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "/dev/loop3"
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             ],
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "lv_name": "ceph_lv0",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "lv_size": "21470642176",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "name": "ceph_lv0",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "tags": {
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.cluster_name": "ceph",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.crush_device_class": "",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.encrypted": "0",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.osd_id": "1",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.type": "block",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.vdo": "0",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:                 "ceph.with_tpm": "0"
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             },
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "type": "block",
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:             "vg_name": "ceph_vg0"
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:         }
Nov 25 09:43:06 compute-0 interesting_almeida[172040]:     ]
Nov 25 09:43:06 compute-0 interesting_almeida[172040]: }
Nov 25 09:43:06 compute-0 systemd[1]: libpod-9f84af8545bdd1b24f353f101b40c8f58053465a47aa6c2626839fc1d4fbedd7.scope: Deactivated successfully.
Nov 25 09:43:06 compute-0 podman[172027]: 2025-11-25 09:43:06.330449062 +0000 UTC m=+0.329133806 container died 9f84af8545bdd1b24f353f101b40c8f58053465a47aa6c2626839fc1d4fbedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8be6ed125510ec68ce5fcf9af4ef701d6c4bc6a5ceb73e85186f2572129c1d79-merged.mount: Deactivated successfully.
Nov 25 09:43:06 compute-0 podman[172027]: 2025-11-25 09:43:06.351803305 +0000 UTC m=+0.350488029 container remove 9f84af8545bdd1b24f353f101b40c8f58053465a47aa6c2626839fc1d4fbedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:43:06 compute-0 systemd[1]: libpod-conmon-9f84af8545bdd1b24f353f101b40c8f58053465a47aa6c2626839fc1d4fbedd7.scope: Deactivated successfully.
Nov 25 09:43:06 compute-0 sudo[171935]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:06 compute-0 sudo[172059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:43:06 compute-0 sudo[172059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:06 compute-0 sudo[172059]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:06 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:06 compute-0 sudo[172084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:43:06 compute-0 sudo[172084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:06 compute-0 ceph-mon[74207]: pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:06 compute-0 podman[172141]: 2025-11-25 09:43:06.841096146 +0000 UTC m=+0.037086562 container create a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:43:06 compute-0 systemd[1]: Started libpod-conmon-a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7.scope.
Nov 25 09:43:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:06 compute-0 podman[172141]: 2025-11-25 09:43:06.90350282 +0000 UTC m=+0.099493256 container init a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:43:06 compute-0 podman[172141]: 2025-11-25 09:43:06.909698705 +0000 UTC m=+0.105689121 container start a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:43:06 compute-0 podman[172141]: 2025-11-25 09:43:06.910863241 +0000 UTC m=+0.106853657 container attach a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_payne, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:43:06 compute-0 suspicious_payne[172154]: 167 167
Nov 25 09:43:06 compute-0 systemd[1]: libpod-a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7.scope: Deactivated successfully.
Nov 25 09:43:06 compute-0 conmon[172154]: conmon a24fed99b55c2a08b7ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7.scope/container/memory.events
Nov 25 09:43:06 compute-0 podman[172141]: 2025-11-25 09:43:06.914259325 +0000 UTC m=+0.110249741 container died a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:43:06 compute-0 podman[172141]: 2025-11-25 09:43:06.827359361 +0000 UTC m=+0.023349797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-07381529e2cb68df65584ca6e9b267b06abee5307330c1396f5ae7519a7d3277-merged.mount: Deactivated successfully.
Nov 25 09:43:06 compute-0 podman[172141]: 2025-11-25 09:43:06.937276875 +0000 UTC m=+0.133267292 container remove a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_payne, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 25 09:43:06 compute-0 systemd[1]: libpod-conmon-a24fed99b55c2a08b7ec52efdd0e1de17584cdc69b50a28c474c8206c22a5be7.scope: Deactivated successfully.
Nov 25 09:43:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:06.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:06.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:06.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:06.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:07 compute-0 podman[172176]: 2025-11-25 09:43:07.070378142 +0000 UTC m=+0.031375943 container create afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:43:07 compute-0 systemd[1]: Started libpod-conmon-afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b.scope.
Nov 25 09:43:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc062d161235c20f7ddd1a1a3f31ef2750c1b68354731fcd7fc76605aecfbcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc062d161235c20f7ddd1a1a3f31ef2750c1b68354731fcd7fc76605aecfbcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc062d161235c20f7ddd1a1a3f31ef2750c1b68354731fcd7fc76605aecfbcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc062d161235c20f7ddd1a1a3f31ef2750c1b68354731fcd7fc76605aecfbcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:43:07 compute-0 podman[172176]: 2025-11-25 09:43:07.134568399 +0000 UTC m=+0.095566190 container init afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:43:07 compute-0 podman[172176]: 2025-11-25 09:43:07.140152299 +0000 UTC m=+0.101150090 container start afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:43:07 compute-0 podman[172176]: 2025-11-25 09:43:07.145917583 +0000 UTC m=+0.106915394 container attach afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:43:07 compute-0 podman[172176]: 2025-11-25 09:43:07.058642259 +0000 UTC m=+0.019640071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:43:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:07 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:07.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:07 compute-0 goofy_fermat[172190]: {}
Nov 25 09:43:07 compute-0 lvm[172267]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:43:07 compute-0 lvm[172267]: VG ceph_vg0 finished
Nov 25 09:43:07 compute-0 systemd[1]: libpod-afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b.scope: Deactivated successfully.
Nov 25 09:43:07 compute-0 conmon[172190]: conmon afa701b3d8977044bd5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b.scope/container/memory.events
Nov 25 09:43:07 compute-0 podman[172176]: 2025-11-25 09:43:07.636674956 +0000 UTC m=+0.597672747 container died afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cc062d161235c20f7ddd1a1a3f31ef2750c1b68354731fcd7fc76605aecfbcd-merged.mount: Deactivated successfully.
Nov 25 09:43:07 compute-0 podman[172176]: 2025-11-25 09:43:07.662158606 +0000 UTC m=+0.623156407 container remove afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 09:43:07 compute-0 systemd[1]: libpod-conmon-afa701b3d8977044bd5bc9a0351feff0c9ab329c266fbb9af78fce032650571b.scope: Deactivated successfully.
Nov 25 09:43:07 compute-0 sudo[172084]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:43:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:43:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:07 compute-0 sudo[172279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:43:07 compute-0 sudo[172279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:07 compute-0 sudo[172279]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d08005270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:08 compute-0 ceph-mon[74207]: pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:43:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:43:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:09.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:43:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:09 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:09.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:10 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:10] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:43:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:10] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Nov 25 09:43:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:10 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:10 compute-0 ceph-mon[74207]: pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:11.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:11 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:11 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Nov 25 09:43:11 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:43:11 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:43:11 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:43:11 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:43:11 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:43:11 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:43:11 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:43:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:11.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:11 compute-0 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 25 09:43:11 compute-0 podman[172316]: 2025-11-25 09:43:11.997751162 +0000 UTC m=+0.057948787 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 25 09:43:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:12 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:12 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:12 compute-0 ceph-mon[74207]: pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:13.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:13 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:13.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:14 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:14 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:14 compute-0 ceph-mon[74207]: pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:43:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:15 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:15.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:16 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:16 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:16 compute-0 ceph-mon[74207]: pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:16.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:16.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:16.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:16.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:17.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:17 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:17.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:17 compute-0 ceph-mon[74207]: pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:18 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:18 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Nov 25 09:43:18 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:43:18 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:43:18 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:43:18 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:43:18 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:43:18 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:43:18 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:43:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:18 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:19.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:19 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:19.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:20 compute-0 sudo[172355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:20 compute-0 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 09:43:20 compute-0 sudo[172355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:20 compute-0 sudo[172355]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:20 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:20] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:43:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:20] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Nov 25 09:43:20 compute-0 ceph-mon[74207]: pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:20 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:21.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:21.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:22 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:22 compute-0 ceph-mon[74207]: pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:22 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:23.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:23 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d080053f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:23.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:24 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:24 compute-0 ceph-mon[74207]: pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:24 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d08007590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:25.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:25 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:25.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:26 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d08007590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:26 compute-0 ceph-mon[74207]: pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:26 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:26.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:26.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:26.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:26.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:27.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:27 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d08007590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:28 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:28 compute-0 ceph-mon[74207]: pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:28 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d08007590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:29.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:29 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:29.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:43:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:30 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d08007590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:30] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:43:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:30] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:43:30 compute-0 ceph-mon[74207]: pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:30 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:31.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:31 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d08007590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:31.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:32 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:32 compute-0 ceph-mon[74207]: pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:43:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:32 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d08007590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:43:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:33.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:43:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:33 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:33.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:34 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10002660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:34 compute-0 ceph-mon[74207]: pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:34 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:34 compute-0 podman[179524]: 2025-11-25 09:43:34.98445582 +0000 UTC m=+0.042765408 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:43:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:35.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:35 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:35.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:36 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d140048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:36 compute-0 ceph-mon[74207]: pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:36 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:36.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:36.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:36.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:36.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:37.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:37 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c1fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:38 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:38 compute-0 ceph-mon[74207]: pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:38 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:39.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:39 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:39.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:40 compute-0 sudo[184426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:43:40 compute-0 sudo[184426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:43:40 compute-0 sudo[184426]: pam_unix(sudo:session): session closed for user root
Nov 25 09:43:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:40 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c1fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:40] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:43:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:40] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Nov 25 09:43:40 compute-0 ceph-mon[74207]: pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:40 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c1fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:41 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 09:43:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:41.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:42 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:42 compute-0 ceph-mon[74207]: pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 09:43:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:42 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:42 compute-0 podman[187121]: 2025-11-25 09:43:42.99677424 +0000 UTC m=+0.057915696 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 09:43:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:43 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 25 09:43:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:43.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:44 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:44 compute-0 ceph-mon[74207]: pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 25 09:43:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:44 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:43:44
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['backups', '.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.nfs', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'volumes']
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:43:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:43:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:43:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:43:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:45 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 25 09:43:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:43:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:45.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:46 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c22e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:46 compute-0 ceph-mon[74207]: pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 25 09:43:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:46 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:46.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:46.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:46.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:46.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:47.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:47 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 09:43:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:47.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:48 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d140058c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:48 compute-0 ceph-mon[74207]: pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 09:43:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:48 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:43:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:43:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:49 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 25 09:43:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:49.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:50 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:50] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Nov 25 09:43:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:43:50] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Nov 25 09:43:50 compute-0 ceph-mon[74207]: pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 25 09:43:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:50 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d140058c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:51 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 09:43:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:51.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:52 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:52 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:52 compute-0 ceph-mon[74207]: pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 09:43:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:43:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:43:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:53 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:53.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:54 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Nov 25 09:43:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:43:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 09:43:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:43:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:43:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:43:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:43:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:43:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:54 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:54 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400d620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:54 compute-0 ceph-mon[74207]: pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:54 compute-0 groupadd[189291]: group added to /etc/group: name=dnsmasq, GID=991
Nov 25 09:43:54 compute-0 groupadd[189291]: group added to /etc/gshadow: name=dnsmasq
Nov 25 09:43:54 compute-0 groupadd[189291]: new group: name=dnsmasq, GID=991
Nov 25 09:43:54 compute-0 useradd[189298]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 25 09:43:54 compute-0 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Nov 25 09:43:54 compute-0 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 09:43:54 compute-0 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Nov 25 09:43:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:55.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:55 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:43:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:43:55 compute-0 groupadd[189312]: group added to /etc/group: name=clevis, GID=990
Nov 25 09:43:55 compute-0 groupadd[189312]: group added to /etc/gshadow: name=clevis
Nov 25 09:43:55 compute-0 groupadd[189312]: new group: name=clevis, GID=990
Nov 25 09:43:55 compute-0 useradd[189319]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 25 09:43:55 compute-0 usermod[189329]: add 'clevis' to group 'tss'
Nov 25 09:43:55 compute-0 usermod[189329]: add 'clevis' to shadow group 'tss'
Nov 25 09:43:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:55.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:56 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:56 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:56 compute-0 ceph-mon[74207]: pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:56.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:56.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:56.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:43:56.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:43:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:57 compute-0 polkitd[43345]: Reloading rules
Nov 25 09:43:57 compute-0 polkitd[43345]: Collecting garbage unconditionally...
Nov 25 09:43:57 compute-0 polkitd[43345]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 09:43:57 compute-0 polkitd[43345]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 09:43:57 compute-0 polkitd[43345]: Finished loading, compiling and executing 3 rules
Nov 25 09:43:57 compute-0 polkitd[43345]: Reloading rules
Nov 25 09:43:57 compute-0 polkitd[43345]: Collecting garbage unconditionally...
Nov 25 09:43:57 compute-0 polkitd[43345]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 09:43:57 compute-0 polkitd[43345]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 09:43:57 compute-0 polkitd[43345]: Finished loading, compiling and executing 3 rules
Nov 25 09:43:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:57 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c24a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:43:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:57.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:43:57 compute-0 groupadd[189519]: group added to /etc/group: name=ceph, GID=167
Nov 25 09:43:57 compute-0 groupadd[189519]: group added to /etc/gshadow: name=ceph
Nov 25 09:43:57 compute-0 groupadd[189519]: new group: name=ceph, GID=167
Nov 25 09:43:57 compute-0 useradd[189525]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 25 09:43:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:43:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:58 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:58 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:58 compute-0 ceph-mon[74207]: pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:43:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:43:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:43:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:43:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:43:59 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:43:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:43:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:43:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:59.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:43:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:43:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:00 compute-0 sudo[190123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:00 compute-0 sudo[190123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:00 compute-0 sudo[190123]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:00 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c24c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:00] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:44:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:00] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:44:00 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 09:44:00 compute-0 sshd[962]: Received signal 15; terminating.
Nov 25 09:44:00 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 09:44:00 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 09:44:00 compute-0 systemd[1]: sshd.service: Consumed 1.449s CPU time, read 32.0K from disk, written 0B to disk.
Nov 25 09:44:00 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 09:44:00 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 25 09:44:00 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:44:00 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:44:00 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:44:00 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 25 09:44:00 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 25 09:44:00 compute-0 sshd[190248]: Server listening on 0.0.0.0 port 22.
Nov 25 09:44:00 compute-0 sshd[190248]: Server listening on :: port 22.
Nov 25 09:44:00 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 25 09:44:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:00 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:00 compute-0 ceph-mon[74207]: pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:01.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:01 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c002990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:44:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:44:01 compute-0 systemd[1]: Reloading.
Nov 25 09:44:01 compute-0 systemd-sysv-generator[190502]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:01.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:01 compute-0 systemd-rc-local-generator[190499]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:44:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:02 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:02 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:02 compute-0 ceph-mon[74207]: pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:44:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:03.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:44:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:03 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:03.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:03 compute-0 sudo[171362]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:04 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c002990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:04 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:04 compute-0 ceph-mon[74207]: pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:05.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:05 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:44:05.372 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:44:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:44:05.372 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:44:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:44:05.372 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:44:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:05.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:05 compute-0 podman[197404]: 2025-11-25 09:44:05.979834952 +0000 UTC m=+0.044340023 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:44:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:06 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:06 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c002990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:06 compute-0 ceph-mon[74207]: pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:06.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:06.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:06.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:06.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:44:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:44:07 compute-0 systemd[1]: man-db-cache-update.service: Consumed 7.121s CPU time.
Nov 25 09:44:07 compute-0 systemd[1]: run-r25ffc49473c247da8c2cb7942b966e13.service: Deactivated successfully.
Nov 25 09:44:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:07 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:07.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:07 compute-0 sudo[198948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:07 compute-0 sudo[198948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:07 compute-0 sudo[198948]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:07 compute-0 sudo[198973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:44:07 compute-0 sudo[198973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:08 compute-0 sudo[198973]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:44:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:44:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:44:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:44:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:44:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:44:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:44:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:44:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:44:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:44:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:08 compute-0 sudo[199027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:08 compute-0 sudo[199027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:08 compute-0 sudo[199027]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:08 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:08 compute-0 sudo[199052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:44:08 compute-0 sudo[199052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:08 compute-0 ceph-mon[74207]: pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:44:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:44:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:44:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:44:08 compute-0 podman[199109]: 2025-11-25 09:44:08.831705321 +0000 UTC m=+0.028870704 container create 31c99855fffc68360ff5d5107e9f9e7e215163bf5b101deab14907c65525bebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_kilby, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:44:08 compute-0 systemd[1]: Started libpod-conmon-31c99855fffc68360ff5d5107e9f9e7e215163bf5b101deab14907c65525bebb.scope.
Nov 25 09:44:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:08 compute-0 podman[199109]: 2025-11-25 09:44:08.891673683 +0000 UTC m=+0.088839066 container init 31c99855fffc68360ff5d5107e9f9e7e215163bf5b101deab14907c65525bebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_kilby, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 09:44:08 compute-0 podman[199109]: 2025-11-25 09:44:08.897159721 +0000 UTC m=+0.094325103 container start 31c99855fffc68360ff5d5107e9f9e7e215163bf5b101deab14907c65525bebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_kilby, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:44:08 compute-0 podman[199109]: 2025-11-25 09:44:08.898561423 +0000 UTC m=+0.095726806 container attach 31c99855fffc68360ff5d5107e9f9e7e215163bf5b101deab14907c65525bebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_kilby, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:44:08 compute-0 elegant_kilby[199122]: 167 167
Nov 25 09:44:08 compute-0 systemd[1]: libpod-31c99855fffc68360ff5d5107e9f9e7e215163bf5b101deab14907c65525bebb.scope: Deactivated successfully.
Nov 25 09:44:08 compute-0 podman[199109]: 2025-11-25 09:44:08.901681402 +0000 UTC m=+0.098846784 container died 31c99855fffc68360ff5d5107e9f9e7e215163bf5b101deab14907c65525bebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa34de7f0ce3ec429c3db5994c66705a463df85a4dec15083dae7b48260a9c0d-merged.mount: Deactivated successfully.
Nov 25 09:44:08 compute-0 podman[199109]: 2025-11-25 09:44:08.819709907 +0000 UTC m=+0.016875310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:44:08 compute-0 podman[199109]: 2025-11-25 09:44:08.925844935 +0000 UTC m=+0.123010318 container remove 31c99855fffc68360ff5d5107e9f9e7e215163bf5b101deab14907c65525bebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:44:08 compute-0 systemd[1]: libpod-conmon-31c99855fffc68360ff5d5107e9f9e7e215163bf5b101deab14907c65525bebb.scope: Deactivated successfully.
Nov 25 09:44:09 compute-0 podman[199144]: 2025-11-25 09:44:09.046222952 +0000 UTC m=+0.029624294 container create 1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_newton, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:44:09 compute-0 systemd[1]: Started libpod-conmon-1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522.scope.
Nov 25 09:44:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a521a49e5e5e46635b6a519a26e26659a64bedeaa961d44c0e29efe2c3325/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a521a49e5e5e46635b6a519a26e26659a64bedeaa961d44c0e29efe2c3325/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a521a49e5e5e46635b6a519a26e26659a64bedeaa961d44c0e29efe2c3325/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a521a49e5e5e46635b6a519a26e26659a64bedeaa961d44c0e29efe2c3325/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646a521a49e5e5e46635b6a519a26e26659a64bedeaa961d44c0e29efe2c3325/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:09.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:09 compute-0 podman[199144]: 2025-11-25 09:44:09.109156992 +0000 UTC m=+0.092558334 container init 1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_newton, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:44:09 compute-0 podman[199144]: 2025-11-25 09:44:09.115949122 +0000 UTC m=+0.099350463 container start 1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_newton, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:44:09 compute-0 podman[199144]: 2025-11-25 09:44:09.117127983 +0000 UTC m=+0.100529325 container attach 1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_newton, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:44:09 compute-0 podman[199144]: 2025-11-25 09:44:09.034766693 +0000 UTC m=+0.018168045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:44:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:09 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c0040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:09 compute-0 infallible_newton[199157]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:44:09 compute-0 infallible_newton[199157]: --> All data devices are unavailable
Nov 25 09:44:09 compute-0 systemd[1]: libpod-1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522.scope: Deactivated successfully.
Nov 25 09:44:09 compute-0 conmon[199157]: conmon 1777395945b968255cc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522.scope/container/memory.events
Nov 25 09:44:09 compute-0 podman[199144]: 2025-11-25 09:44:09.37682062 +0000 UTC m=+0.360221961 container died 1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-646a521a49e5e5e46635b6a519a26e26659a64bedeaa961d44c0e29efe2c3325-merged.mount: Deactivated successfully.
Nov 25 09:44:09 compute-0 podman[199144]: 2025-11-25 09:44:09.400928448 +0000 UTC m=+0.384329780 container remove 1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_newton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:44:09 compute-0 systemd[1]: libpod-conmon-1777395945b968255cc8385777017dd7a3f7a9fd192364052f2e4aefd0438522.scope: Deactivated successfully.
Nov 25 09:44:09 compute-0 sudo[199052]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:09 compute-0 sudo[199181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:09 compute-0 sudo[199181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:09 compute-0 sudo[199181]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:09 compute-0 sudo[199206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:44:09 compute-0 sudo[199206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:09.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:09 compute-0 podman[199263]: 2025-11-25 09:44:09.812924741 +0000 UTC m=+0.029409551 container create fbd07e0c62854b183352fa9651f706915fb584a4238a1c3b604a8d0a2e84d999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:44:09 compute-0 systemd[1]: Started libpod-conmon-fbd07e0c62854b183352fa9651f706915fb584a4238a1c3b604a8d0a2e84d999.scope.
Nov 25 09:44:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:09 compute-0 podman[199263]: 2025-11-25 09:44:09.864115964 +0000 UTC m=+0.080600773 container init fbd07e0c62854b183352fa9651f706915fb584a4238a1c3b604a8d0a2e84d999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mayer, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:44:09 compute-0 podman[199263]: 2025-11-25 09:44:09.86891148 +0000 UTC m=+0.085396289 container start fbd07e0c62854b183352fa9651f706915fb584a4238a1c3b604a8d0a2e84d999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mayer, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:44:09 compute-0 amazing_mayer[199276]: 167 167
Nov 25 09:44:09 compute-0 systemd[1]: libpod-fbd07e0c62854b183352fa9651f706915fb584a4238a1c3b604a8d0a2e84d999.scope: Deactivated successfully.
Nov 25 09:44:09 compute-0 podman[199263]: 2025-11-25 09:44:09.872162728 +0000 UTC m=+0.088647537 container attach fbd07e0c62854b183352fa9651f706915fb584a4238a1c3b604a8d0a2e84d999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:44:09 compute-0 podman[199263]: 2025-11-25 09:44:09.872393091 +0000 UTC m=+0.088877901 container died fbd07e0c62854b183352fa9651f706915fb584a4238a1c3b604a8d0a2e84d999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mayer, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-45d6718e1c96fcff2665acaa6bccd1139712433a840341f427f224d495734ca2-merged.mount: Deactivated successfully.
Nov 25 09:44:09 compute-0 podman[199263]: 2025-11-25 09:44:09.88975078 +0000 UTC m=+0.106235589 container remove fbd07e0c62854b183352fa9651f706915fb584a4238a1c3b604a8d0a2e84d999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:44:09 compute-0 podman[199263]: 2025-11-25 09:44:09.800825261 +0000 UTC m=+0.017310080 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:44:09 compute-0 systemd[1]: libpod-conmon-fbd07e0c62854b183352fa9651f706915fb584a4238a1c3b604a8d0a2e84d999.scope: Deactivated successfully.
Nov 25 09:44:10 compute-0 podman[199299]: 2025-11-25 09:44:10.007589722 +0000 UTC m=+0.027962964 container create 895ec7a7e8f0482cfe792f2eedbef4d9033a3767657bdafe79c22210882de3dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:44:10 compute-0 systemd[1]: Started libpod-conmon-895ec7a7e8f0482cfe792f2eedbef4d9033a3767657bdafe79c22210882de3dd.scope.
Nov 25 09:44:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3303996c819abab9aeb5f6c563d902521512ae3bfa9bd65f4dc4fed314a745/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3303996c819abab9aeb5f6c563d902521512ae3bfa9bd65f4dc4fed314a745/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3303996c819abab9aeb5f6c563d902521512ae3bfa9bd65f4dc4fed314a745/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3303996c819abab9aeb5f6c563d902521512ae3bfa9bd65f4dc4fed314a745/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:10 compute-0 podman[199299]: 2025-11-25 09:44:10.06974781 +0000 UTC m=+0.090121053 container init 895ec7a7e8f0482cfe792f2eedbef4d9033a3767657bdafe79c22210882de3dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_dewdney, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:44:10 compute-0 podman[199299]: 2025-11-25 09:44:10.074950854 +0000 UTC m=+0.095324097 container start 895ec7a7e8f0482cfe792f2eedbef4d9033a3767657bdafe79c22210882de3dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:44:10 compute-0 podman[199299]: 2025-11-25 09:44:10.076162328 +0000 UTC m=+0.096535590 container attach 895ec7a7e8f0482cfe792f2eedbef4d9033a3767657bdafe79c22210882de3dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_dewdney, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:44:10 compute-0 podman[199299]: 2025-11-25 09:44:09.996482832 +0000 UTC m=+0.016856075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:44:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:10 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:10] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:44:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:10] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]: {
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:     "1": [
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:         {
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "devices": [
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "/dev/loop3"
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             ],
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "lv_name": "ceph_lv0",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "lv_size": "21470642176",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "name": "ceph_lv0",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "tags": {
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.cluster_name": "ceph",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.crush_device_class": "",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.encrypted": "0",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.osd_id": "1",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.type": "block",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.vdo": "0",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:                 "ceph.with_tpm": "0"
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             },
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "type": "block",
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:             "vg_name": "ceph_vg0"
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:         }
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]:     ]
Nov 25 09:44:10 compute-0 laughing_dewdney[199312]: }
Nov 25 09:44:10 compute-0 systemd[1]: libpod-895ec7a7e8f0482cfe792f2eedbef4d9033a3767657bdafe79c22210882de3dd.scope: Deactivated successfully.
Nov 25 09:44:10 compute-0 podman[199321]: 2025-11-25 09:44:10.336800787 +0000 UTC m=+0.016807542 container died 895ec7a7e8f0482cfe792f2eedbef4d9033a3767657bdafe79c22210882de3dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c3303996c819abab9aeb5f6c563d902521512ae3bfa9bd65f4dc4fed314a745-merged.mount: Deactivated successfully.
Nov 25 09:44:10 compute-0 podman[199321]: 2025-11-25 09:44:10.359826025 +0000 UTC m=+0.039832771 container remove 895ec7a7e8f0482cfe792f2eedbef4d9033a3767657bdafe79c22210882de3dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_dewdney, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:44:10 compute-0 systemd[1]: libpod-conmon-895ec7a7e8f0482cfe792f2eedbef4d9033a3767657bdafe79c22210882de3dd.scope: Deactivated successfully.
Nov 25 09:44:10 compute-0 sudo[199206]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:10 compute-0 sudo[199333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:44:10 compute-0 sudo[199333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:10 compute-0 sudo[199333]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:10 compute-0 sudo[199358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:44:10 compute-0 sudo[199358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:10 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:10 compute-0 ceph-mon[74207]: pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:10 compute-0 podman[199415]: 2025-11-25 09:44:10.772094482 +0000 UTC m=+0.027334400 container create aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 09:44:10 compute-0 systemd[1]: Started libpod-conmon-aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6.scope.
Nov 25 09:44:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:10 compute-0 podman[199415]: 2025-11-25 09:44:10.836187957 +0000 UTC m=+0.091427896 container init aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:44:10 compute-0 podman[199415]: 2025-11-25 09:44:10.840368285 +0000 UTC m=+0.095608203 container start aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:44:10 compute-0 podman[199415]: 2025-11-25 09:44:10.842336984 +0000 UTC m=+0.097576902 container attach aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:44:10 compute-0 awesome_haslett[199428]: 167 167
Nov 25 09:44:10 compute-0 systemd[1]: libpod-aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6.scope: Deactivated successfully.
Nov 25 09:44:10 compute-0 conmon[199428]: conmon aa0f2eb6cf8a5e225f2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6.scope/container/memory.events
Nov 25 09:44:10 compute-0 podman[199415]: 2025-11-25 09:44:10.844524666 +0000 UTC m=+0.099764584 container died aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-02a800a0cc4cd39e5106ef14693f578980f0ce61b1444e7d8d2bbc7c71af175a-merged.mount: Deactivated successfully.
Nov 25 09:44:10 compute-0 podman[199415]: 2025-11-25 09:44:10.760559606 +0000 UTC m=+0.015799524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:44:10 compute-0 podman[199415]: 2025-11-25 09:44:10.86115213 +0000 UTC m=+0.116392047 container remove aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:44:10 compute-0 systemd[1]: libpod-conmon-aa0f2eb6cf8a5e225f2dc1be666024b17f50e6c9b2831f291be798fc717988a6.scope: Deactivated successfully.
Nov 25 09:44:10 compute-0 podman[199451]: 2025-11-25 09:44:10.978004933 +0000 UTC m=+0.027644385 container create 2c37e1b466fd9d26dd44da4dc17714acc8f964316e4d78da33ec4534ecf55440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_meninsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:44:11 compute-0 systemd[1]: Started libpod-conmon-2c37e1b466fd9d26dd44da4dc17714acc8f964316e4d78da33ec4534ecf55440.scope.
Nov 25 09:44:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0bf60dc0f2b991fa91db5eb205cb437f900f4d24ebb7fa8301a30c844427db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0bf60dc0f2b991fa91db5eb205cb437f900f4d24ebb7fa8301a30c844427db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0bf60dc0f2b991fa91db5eb205cb437f900f4d24ebb7fa8301a30c844427db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0bf60dc0f2b991fa91db5eb205cb437f900f4d24ebb7fa8301a30c844427db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:44:11 compute-0 podman[199451]: 2025-11-25 09:44:11.0333274 +0000 UTC m=+0.082966872 container init 2c37e1b466fd9d26dd44da4dc17714acc8f964316e4d78da33ec4534ecf55440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:44:11 compute-0 podman[199451]: 2025-11-25 09:44:11.038387035 +0000 UTC m=+0.088026475 container start 2c37e1b466fd9d26dd44da4dc17714acc8f964316e4d78da33ec4534ecf55440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:44:11 compute-0 podman[199451]: 2025-11-25 09:44:11.03967397 +0000 UTC m=+0.089313411 container attach 2c37e1b466fd9d26dd44da4dc17714acc8f964316e4d78da33ec4534ecf55440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_meninsky, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:44:11 compute-0 podman[199451]: 2025-11-25 09:44:10.967392475 +0000 UTC m=+0.017031946 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:44:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:44:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:44:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:11 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:11 compute-0 lvm[199540]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:44:11 compute-0 lvm[199540]: VG ceph_vg0 finished
Nov 25 09:44:11 compute-0 wonderful_meninsky[199464]: {}
Nov 25 09:44:11 compute-0 systemd[1]: libpod-2c37e1b466fd9d26dd44da4dc17714acc8f964316e4d78da33ec4534ecf55440.scope: Deactivated successfully.
Nov 25 09:44:11 compute-0 podman[199451]: 2025-11-25 09:44:11.535603575 +0000 UTC m=+0.585243026 container died 2c37e1b466fd9d26dd44da4dc17714acc8f964316e4d78da33ec4534ecf55440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a0bf60dc0f2b991fa91db5eb205cb437f900f4d24ebb7fa8301a30c844427db-merged.mount: Deactivated successfully.
Nov 25 09:44:11 compute-0 podman[199451]: 2025-11-25 09:44:11.557703238 +0000 UTC m=+0.607342689 container remove 2c37e1b466fd9d26dd44da4dc17714acc8f964316e4d78da33ec4534ecf55440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:44:11 compute-0 systemd[1]: libpod-conmon-2c37e1b466fd9d26dd44da4dc17714acc8f964316e4d78da33ec4534ecf55440.scope: Deactivated successfully.
Nov 25 09:44:11 compute-0 sudo[199358]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:44:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:44:11 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:11 compute-0 sudo[199551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:44:11 compute-0 sudo[199551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:11 compute-0 sudo[199551]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:11.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:12 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c0040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:12 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2560 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:12 compute-0 ceph-mon[74207]: pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:44:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:13.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:13 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2560 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:44:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:13.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:44:13 compute-0 podman[199580]: 2025-11-25 09:44:13.989886151 +0000 UTC m=+0.054352021 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:44:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:14 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:14 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:14 compute-0 ceph-mon[74207]: pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:44:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:15.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:15 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2560 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:15.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:16 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2560 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:16 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:16 compute-0 ceph-mon[74207]: pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:16.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:16.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:16.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:16.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:17 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:17 compute-0 sudo[199730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukqotvozunckuticjwimdfnokxenmfgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063857.1455147-968-68962292091289/AnsiballZ_systemd.py'
Nov 25 09:44:17 compute-0 sudo[199730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:17.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:17 compute-0 python3.9[199732]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:17 compute-0 systemd[1]: Reloading.
Nov 25 09:44:17 compute-0 systemd-rc-local-generator[199761]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:17 compute-0 systemd-sysv-generator[199764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:18 compute-0 sudo[199730]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:18 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c2560 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:18 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:18 compute-0 sudo[199921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqcqqkgtieetkdvvqjketvkcfxrhiijf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063858.3942447-968-168616094500956/AnsiballZ_systemd.py'
Nov 25 09:44:18 compute-0 sudo[199921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:18 compute-0 ceph-mon[74207]: pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:18 compute-0 python3.9[199923]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:18 compute-0 systemd[1]: Reloading.
Nov 25 09:44:18 compute-0 systemd-rc-local-generator[199948]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:18 compute-0 systemd-sysv-generator[199952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:19 compute-0 sudo[199921]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:19 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:19 compute-0 sudo[200111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswfmuansbhzxrjmascfmmqxgjhslfmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063859.2218695-968-184412971501341/AnsiballZ_systemd.py'
Nov 25 09:44:19 compute-0 sudo[200111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:19 compute-0 python3.9[200113]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:19.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:19 compute-0 systemd[1]: Reloading.
Nov 25 09:44:19 compute-0 systemd-sysv-generator[200140]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:19 compute-0 systemd-rc-local-generator[200137]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:19 compute-0 sudo[200111]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:20 compute-0 sudo[200253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:20 compute-0 sudo[200253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:20 compute-0 sudo[200253]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:20 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c005150 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:20 compute-0 sudo[200328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vswkoepaqsfmhovgbnwqjihfpdgvekfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063860.0449543-968-53158706009186/AnsiballZ_systemd.py'
Nov 25 09:44:20 compute-0 sudo[200328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:20] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Nov 25 09:44:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:20] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Nov 25 09:44:20 compute-0 python3.9[200330]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:20 compute-0 systemd[1]: Reloading.
Nov 25 09:44:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:20 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c005150 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:20 compute-0 systemd-rc-local-generator[200352]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:20 compute-0 systemd-sysv-generator[200355]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:20 compute-0 ceph-mon[74207]: pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:20 compute-0 sudo[200328]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:21.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:21 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:21 compute-0 sudo[200517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlhmjlpfoacpgdonglmrmmxkmtzlyvas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063861.080667-1055-31237850117710/AnsiballZ_systemd.py'
Nov 25 09:44:21 compute-0 sudo[200517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:44:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:21.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:44:21 compute-0 python3.9[200519]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:21 compute-0 systemd[1]: Reloading.
Nov 25 09:44:21 compute-0 systemd-sysv-generator[200548]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:21 compute-0 systemd-rc-local-generator[200545]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:22 compute-0 sudo[200517]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:22 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:22 compute-0 sudo[200708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gctkztmoaikfyisxdhahjjusltxozsgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063862.150586-1055-102926873666632/AnsiballZ_systemd.py'
Nov 25 09:44:22 compute-0 sudo[200708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:22 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c005150 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:22 compute-0 python3.9[200710]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:22 compute-0 systemd[1]: Reloading.
Nov 25 09:44:22 compute-0 ceph-mon[74207]: pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:22 compute-0 systemd-rc-local-generator[200738]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:22 compute-0 systemd-sysv-generator[200741]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:22 compute-0 sudo[200708]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:23.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:23 compute-0 sudo[200898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bodvtvlwalgqfhlvziemeplvqfvceteu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063862.9889798-1055-236986972709371/AnsiballZ_systemd.py'
Nov 25 09:44:23 compute-0 sudo[200898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:23 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c005150 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:23 compute-0 python3.9[200900]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:23 compute-0 systemd[1]: Reloading.
Nov 25 09:44:23 compute-0 systemd-sysv-generator[200928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:23 compute-0 systemd-rc-local-generator[200924]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:23.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:23 compute-0 sudo[200898]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:24 compute-0 sudo[201090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npvkwyumsaoulwpnvokihgysicddetnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063863.8106503-1055-77923879020038/AnsiballZ_systemd.py'
Nov 25 09:44:24 compute-0 sudo[201090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:24 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:24 compute-0 python3.9[201092]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:24 compute-0 sudo[201090]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:24 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:24 compute-0 sudo[201245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiytqjkrmoaahxqqadpfszycmukmgcak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063864.4592478-1055-230842987738863/AnsiballZ_systemd.py'
Nov 25 09:44:24 compute-0 sudo[201245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:24 compute-0 ceph-mon[74207]: pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:24 compute-0 python3.9[201247]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:24 compute-0 systemd[1]: Reloading.
Nov 25 09:44:24 compute-0 systemd-sysv-generator[201281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:25 compute-0 systemd-rc-local-generator[201278]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:25.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:25 compute-0 sudo[201245]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:25 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d040c25e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:25.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:25 compute-0 sudo[201436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqsljwknkzlxentkazmvxlrgkkwoifup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063865.4945033-1163-252946739397052/AnsiballZ_systemd.py'
Nov 25 09:44:25 compute-0 sudo[201436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:25 compute-0 python3.9[201438]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 09:44:25 compute-0 systemd[1]: Reloading.
Nov 25 09:44:26 compute-0 systemd-rc-local-generator[201463]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:44:26 compute-0 systemd-sysv-generator[201466]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:44:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:26 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d1c005150 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:26 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 09:44:26 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 09:44:26 compute-0 sudo[201436]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:26 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:26 compute-0 sudo[201632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmttauuxhieokcgcghydpbmlqkcdzwqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063866.47675-1187-61701917048543/AnsiballZ_systemd.py'
Nov 25 09:44:26 compute-0 sudo[201632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:26 compute-0 ceph-mon[74207]: pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:26 compute-0 python3.9[201634]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:26 compute-0 sudo[201632]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:26.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:26.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:26.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:26.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:27.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:27 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:27 compute-0 sudo[201787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xibpczxldgaogradovivrmdylkslcjvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063867.0650923-1187-44872413706147/AnsiballZ_systemd.py'
Nov 25 09:44:27 compute-0 sudo[201787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:27.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:27 compute-0 python3.9[201789]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:27 compute-0 sudo[201787]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:28 compute-0 sudo[201944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzgwnkygqcdeojfyocebrutjixzviunr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063867.8581135-1187-54525589226729/AnsiballZ_systemd.py'
Nov 25 09:44:28 compute-0 sudo[201944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:28 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400c770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:28 compute-0 python3.9[201946]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:28 compute-0 sudo[201944]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:28 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0001e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:28 compute-0 auditd[671]: Audit daemon rotating log files
Nov 25 09:44:28 compute-0 sudo[202099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrcjwmwnakevmqbgqwrrxgqvexrgmoye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063868.4391189-1187-31962795869588/AnsiballZ_systemd.py'
Nov 25 09:44:28 compute-0 sudo[202099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:28 compute-0 ceph-mon[74207]: pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:28 compute-0 python3.9[202101]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:28 compute-0 sudo[202099]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:29.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:29 compute-0 sudo[202254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edscyyibepytlwmlxioazjndplpnwhxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063869.0004227-1187-149838019287240/AnsiballZ_systemd.py'
Nov 25 09:44:29 compute-0 sudo[202254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:29 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:29 compute-0 python3.9[202256]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:29 compute-0 sudo[202254]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:29.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:29 compute-0 sudo[202410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvpfzsoolwhgvialtlhhkgdcwsugzgir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063869.5736096-1187-276713047272455/AnsiballZ_systemd.py'
Nov 25 09:44:29 compute-0 sudo[202410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:44:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:30 compute-0 python3.9[202412]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:30 compute-0 sudo[202410]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:30 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:30] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:44:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:30] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:44:30 compute-0 sudo[202568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urqwphmqsmnvfemryrnttcqvydtflsrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063870.1522064-1187-170032227570725/AnsiballZ_systemd.py'
Nov 25 09:44:30 compute-0 sudo[202568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:30 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:30 compute-0 ceph-mon[74207]: pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:30 compute-0 python3.9[202570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:30 compute-0 sudo[202568]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:31 compute-0 sudo[202723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcdzetbwrrcrlsiknhhpsgnqicavjyyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063870.9025018-1187-76641937130063/AnsiballZ_systemd.py'
Nov 25 09:44:31 compute-0 sudo[202723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:31.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:31 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400c770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:31 compute-0 python3.9[202725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:31 compute-0 sudo[202723]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:31.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:31 compute-0 sudo[202879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkddubvasnwkaqjidlsynybtwgfmzppq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063871.5117905-1187-16502752670341/AnsiballZ_systemd.py'
Nov 25 09:44:31 compute-0 sudo[202879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:31 compute-0 python3.9[202881]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:32 compute-0 sudo[202879]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:32 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:32 compute-0 sudo[203035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzwzbvcigmworqhkbiqwlfquujalkulh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063872.0936377-1187-222271493840375/AnsiballZ_systemd.py'
Nov 25 09:44:32 compute-0 sudo[203035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:32 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:32 compute-0 python3.9[203037]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:32 compute-0 sudo[203035]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:32 compute-0 ceph-mon[74207]: pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:32 compute-0 sudo[203190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diqmbcxinkzbgvxqzduedlxokfaimnpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063872.703347-1187-10696856342022/AnsiballZ_systemd.py'
Nov 25 09:44:32 compute-0 sudo[203190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:33.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:33 compute-0 python3.9[203192]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:33 compute-0 sudo[203190]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:33 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:33 compute-0 sudo[203345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgetvanlgwgpcdmcwfhybmttimbzbakt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063873.2918684-1187-203608720755180/AnsiballZ_systemd.py'
Nov 25 09:44:33 compute-0 sudo[203345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:33.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:33 compute-0 python3.9[203347]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:33 compute-0 sudo[203345]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:34 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400c770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:34 compute-0 sudo[203502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfquqaquztcaddepiwluitzbfmdsjuef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063874.061523-1187-71143217095751/AnsiballZ_systemd.py'
Nov 25 09:44:34 compute-0 sudo[203502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:34 compute-0 python3.9[203504]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:34 compute-0 sudo[203502]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:34 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:34 compute-0 ceph-mon[74207]: pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:34 compute-0 sudo[203657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwcjndicgaebecklhhvbvthoucjftjbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063874.6215508-1187-47967564733926/AnsiballZ_systemd.py'
Nov 25 09:44:34 compute-0 sudo[203657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:35 compute-0 python3.9[203659]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 09:44:35 compute-0 sudo[203657]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:35.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:35 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:35.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:35 compute-0 sudo[203813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjyggsgmsihyyarypaoqiaublykqqgsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063875.6325488-1493-57037947216638/AnsiballZ_file.py'
Nov 25 09:44:35 compute-0 sudo[203813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:35 compute-0 python3.9[203815]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:36 compute-0 sudo[203813]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:36 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:36 compute-0 sudo[203974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxacxaicenbgmqoawlnkntsqejkyvxct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063876.1081936-1493-4717553676007/AnsiballZ_file.py'
Nov 25 09:44:36 compute-0 sudo[203974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:36 compute-0 podman[203940]: 2025-11-25 09:44:36.315686582 +0000 UTC m=+0.045244660 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:44:36 compute-0 python3.9[203982]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:36 compute-0 sudo[203974]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:36 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:36 compute-0 ceph-mon[74207]: pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:36 compute-0 sudo[204134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrkfziaukpfneopsxgabpdebmrafwqto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063876.715318-1493-152478505333911/AnsiballZ_file.py'
Nov 25 09:44:36 compute-0 sudo[204134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:36.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:37.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:37.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:37.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:37 compute-0 python3.9[204136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:37 compute-0 sudo[204134]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:37.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:37 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:37 compute-0 sudo[204286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbsiaiwnflbehhndkbovhdqlhcuryetk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063877.200062-1493-276882668402027/AnsiballZ_file.py'
Nov 25 09:44:37 compute-0 sudo[204286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:37 compute-0 python3.9[204288]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:37 compute-0 sudo[204286]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:37.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:37 compute-0 sudo[204439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiyxyykmphponydcmmywzqdpowkbmyqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063877.6540866-1493-258605094602617/AnsiballZ_file.py'
Nov 25 09:44:37 compute-0 sudo[204439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:38 compute-0 python3.9[204441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:38 compute-0 sudo[204439]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:38 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:38 compute-0 sudo[204592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjyulpegahxxdyjigijwlsryrqnzwuag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063878.1405392-1493-100045536727572/AnsiballZ_file.py'
Nov 25 09:44:38 compute-0 sudo[204592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:38 compute-0 python3.9[204594]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:44:38 compute-0 sudo[204592]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:38 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14006870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:38 compute-0 ceph-mon[74207]: pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:38 compute-0 sudo[204744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxytifqliucqxyqlcfxlvykqlvvvcdao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063878.636701-1622-134702772284863/AnsiballZ_stat.py'
Nov 25 09:44:38 compute-0 sudo[204744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:39 compute-0 python3.9[204746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:39 compute-0 sudo[204744]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:39.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:39 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce400c770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:39 compute-0 sudo[204869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmuhbkrolxwrocmgpomrnutjqpczmhlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063878.636701-1622-134702772284863/AnsiballZ_copy.py'
Nov 25 09:44:39 compute-0 sudo[204869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:39 compute-0 python3.9[204871]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063878.636701-1622-134702772284863/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:39 compute-0 sudo[204869]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:39 compute-0 sudo[205023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsxreastwbljiiwllxyhlauelppljlug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063879.7546558-1622-79488281082340/AnsiballZ_stat.py'
Nov 25 09:44:39 compute-0 sudo[205023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:40 compute-0 python3.9[205025]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:40 compute-0 sudo[205023]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:40 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:40 compute-0 sudo[205075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:44:40 compute-0 sudo[205075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:44:40 compute-0 sudo[205075]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:40] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:44:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:40] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:44:40 compute-0 sudo[205173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlezblwefgitbgbnwpdlddyiizyrsxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063879.7546558-1622-79488281082340/AnsiballZ_copy.py'
Nov 25 09:44:40 compute-0 sudo[205173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:40 compute-0 python3.9[205175]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063879.7546558-1622-79488281082340/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:40 compute-0 sudo[205173]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:40 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:40 compute-0 ceph-mon[74207]: pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:40 compute-0 sudo[205325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzoohadqcjzzzoczfhlzaemsrkmmvxry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063880.608528-1622-76404873599224/AnsiballZ_stat.py'
Nov 25 09:44:40 compute-0 sudo[205325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:40 compute-0 python3.9[205327]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:40 compute-0 sudo[205325]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:41.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:41 compute-0 sudo[205450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-einpyghxekevdbvcxiogcedrhxphfgzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063880.608528-1622-76404873599224/AnsiballZ_copy.py'
Nov 25 09:44:41 compute-0 sudo[205450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:41 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2800f6e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:41 compute-0 python3.9[205452]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063880.608528-1622-76404873599224/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:41 compute-0 sudo[205450]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:41 compute-0 sudo[205602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdwenomxdtyagszhkhfppvlyxjppzwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063881.4816215-1622-89339957884064/AnsiballZ_stat.py'
Nov 25 09:44:41 compute-0 sudo[205602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:41.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:41 compute-0 python3.9[205605]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:41 compute-0 sudo[205602]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:42 compute-0 sudo[205729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwvkakgzhokxrjaljqsvcpshsayaurms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063881.4816215-1622-89339957884064/AnsiballZ_copy.py'
Nov 25 09:44:42 compute-0 sudo[205729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:42 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:42 compute-0 python3.9[205731]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063881.4816215-1622-89339957884064/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:42 compute-0 sudo[205729]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:42 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:42 compute-0 sudo[205881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgdscdyuhuleowbfwkfvmtmzgxqczxbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063882.4022205-1622-73430307986850/AnsiballZ_stat.py'
Nov 25 09:44:42 compute-0 sudo[205881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:42 compute-0 ceph-mon[74207]: pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:42 compute-0 python3.9[205883]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:42 compute-0 sudo[205881]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:43 compute-0 sudo[206006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpxdnnesbvpccepsgmefjtznqcqslnay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063882.4022205-1622-73430307986850/AnsiballZ_copy.py'
Nov 25 09:44:43 compute-0 sudo[206006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:43.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:43 compute-0 python3.9[206008]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063882.4022205-1622-73430307986850/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:43 compute-0 sudo[206006]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:43 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:43 compute-0 sudo[206158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbyihbhmauohjaefxkvbejacdewzldld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063883.3649232-1622-185820741811980/AnsiballZ_stat.py'
Nov 25 09:44:43 compute-0 sudo[206158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:43 compute-0 python3.9[206160]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:43 compute-0 sudo[206158]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:44 compute-0 sudo[206285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqbpdyzxtmjfhsjwqbgaqtzxkadgiqbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063883.3649232-1622-185820741811980/AnsiballZ_copy.py'
Nov 25 09:44:44 compute-0 sudo[206285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:44 compute-0 podman[206287]: 2025-11-25 09:44:44.070751197 +0000 UTC m=+0.059437414 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 25 09:44:44 compute-0 python3.9[206288]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063883.3649232-1622-185820741811980/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:44 compute-0 sudo[206285]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:44 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2800f6e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:44 compute-0 sudo[206460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uceoxrejjazpzucgtuuqiepxxpwjhhyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063884.2846017-1622-218387163169470/AnsiballZ_stat.py'
Nov 25 09:44:44 compute-0 sudo[206460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:44 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:44 compute-0 python3.9[206462]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:44 compute-0 sudo[206460]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:44 compute-0 ceph-mon[74207]: pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:44 compute-0 sudo[206583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnjoyvvvbzhawkwlzbmcscxnztmijdad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063884.2846017-1622-218387163169470/AnsiballZ_copy.py'
Nov 25 09:44:44 compute-0 sudo[206583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:44:44
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.nfs', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'volumes']
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
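[editor's note] This balancer pass evaluated all twelve pools in upmap mode and prepared 0 of at most 10 changes for the iteration, i.e. the PG distribution is already within the 5% misplaced threshold logged above. The same state can be confirmed from the CLI; a sketch using the real "ceph balancer status" command (the JSON keys shown are the commonly returned ones; verify against your release):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    # Typical fields include "active" and "mode", e.g. True / "upmap".
    print(status.get("mode"), status.get("active"))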
Nov 25 09:44:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:44:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:44:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:44:45 compute-0 python3.9[206585]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063884.2846017-1622-218387163169470/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:44:45 compute-0 sudo[206583]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:45.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:45 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:45 compute-0 sudo[206735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbhzkdsaymmvpjrkiwgtlwteulvljnjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063885.1288688-1622-222154194830088/AnsiballZ_stat.py'
Nov 25 09:44:45 compute-0 sudo[206735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:45 compute-0 python3.9[206737]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:45 compute-0 sudo[206735]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:45.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:45 compute-0 sudo[206861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlxincsrqktdwhgtvdiduiltcmfzxveu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063885.1288688-1622-222154194830088/AnsiballZ_copy.py'
Nov 25 09:44:45 compute-0 sudo[206861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:44:45 compute-0 python3.9[206863]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063885.1288688-1622-222154194830088/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:45 compute-0 sudo[206861]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:46 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:46 compute-0 sudo[207014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmfubkoywfxronfcbcucyboulogevpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063886.1164446-1961-224432017914389/AnsiballZ_command.py'
Nov 25 09:44:46 compute-0 sudo[207014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:46 compute-0 python3.9[207016]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 25 09:44:46 compute-0 sudo[207014]: pam_unix(sudo:session): session closed for user root
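[editor's note] The saslpasswd2 step above provisions the SASL credential libvirt uses for authenticated live migration: -f names the credential database, -a libvirt the SASL application, -u openstack the realm, and -p reads the secret from stdin so it never appears in the argument list (the play's stdin=12345678 is visible in this log only because module arguments, unlike copied file contents, are logged). An equivalent invocation from Python, with sasldblistusers2 as a verification step; both binaries ship with cyrus-sasl:

    import subprocess

    # Recreate the credential exactly as the play did; the secret travels
    # over stdin, not argv.
    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db",
         "-p", "-a", "libvirt", "-u", "openstack", "migration"],
        input="12345678\n", text=True, check=True,
    )

    # Expect a line like "migration@openstack: userPassword" in the output.
    subprocess.run(["sasldblistusers2", "-f", "/etc/libvirt/passwd.db"],
                   check=True)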
Nov 25 09:44:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:46 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2800f6e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:46 compute-0 ceph-mon[74207]: pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:46 compute-0 sudo[207167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmywwcanxawwicpsidzjgqogdobntgmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063886.713758-1988-51733020433538/AnsiballZ_file.py'
Nov 25 09:44:46 compute-0 sudo[207167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:46 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Check health
Nov 25 09:44:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:46.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:46.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:46.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:46.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
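[editor's note] All three ceph-dashboard webhook receivers fail for the same reason: the alert targets np0005534694.shiftstack, np0005534695.shiftstack and np0005534696.shiftstack do not resolve through the configured resolver at 192.168.122.80, so Alertmanager exhausts its retries (dispatch.go logs the give-up, notify.go the immediate re-attempt). The failure is reproducible independently of Alertmanager; the hostnames below are taken verbatim from the log:

    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, "resolves")
        except socket.gaierror as exc:
            # Matches the "no such host" in the Alertmanager errors above.
            print(host, "->", exc)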
Nov 25 09:44:47 compute-0 python3.9[207169]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:47 compute-0 sudo[207167]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:47.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:47 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:47 compute-0 sudo[207319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qptdjqlihkjeafxoqqhaqugpoidtrrwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063887.1552577-1988-5919967669381/AnsiballZ_file.py'
Nov 25 09:44:47 compute-0 sudo[207319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:47 compute-0 python3.9[207321]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:47 compute-0 sudo[207319]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:47.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:47 compute-0 sudo[207472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkqhrvfgbxaxvgzhhojjnfhxvbfcarim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063887.6462324-1988-205599895018161/AnsiballZ_file.py'
Nov 25 09:44:47 compute-0 sudo[207472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:47 compute-0 python3.9[207474]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:47 compute-0 sudo[207472]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:48 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:48 compute-0 sudo[207625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjooznwqehlfemwijeismxcbxfhlsvul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063888.0923588-1988-100439720573939/AnsiballZ_file.py'
Nov 25 09:44:48 compute-0 sudo[207625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:48 compute-0 python3.9[207627]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:48 compute-0 sudo[207625]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:48 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:48 compute-0 sudo[207778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udigcvpprowxqqhmqzbmwjhopaebfonl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063888.537676-1988-85392033884423/AnsiballZ_file.py'
Nov 25 09:44:48 compute-0 sudo[207778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:48 compute-0 ceph-mon[74207]: pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:48 compute-0 python3.9[207780]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:48 compute-0 sudo[207778]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:49.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:49 compute-0 sudo[207930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwitrdfqcqumvmiwoogutudniveifeft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063889.02705-1988-56507118508199/AnsiballZ_file.py'
Nov 25 09:44:49 compute-0 sudo[207930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:49 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2800f6e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:49 compute-0 python3.9[207932]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:49 compute-0 sudo[207930]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:49 compute-0 sudo[208083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruxhgfnmkseaakxgtlgsykleveodaepe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063889.5197914-1988-44335845763311/AnsiballZ_file.py'
Nov 25 09:44:49 compute-0 sudo[208083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:49.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:49 compute-0 python3.9[208085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:49 compute-0 sudo[208083]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:50 compute-0 sudo[208236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cflyzsloiutyfyjfynnvodzidyvcdaaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063889.9537547-1988-133859813707337/AnsiballZ_file.py'
Nov 25 09:44:50 compute-0 sudo[208236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:50 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2800f6e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:50] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Nov 25 09:44:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:44:50] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Nov 25 09:44:50 compute-0 python3.9[208238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:50 compute-0 sudo[208236]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:50 compute-0 sudo[208388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmledchezzsjmbyhbmpwyehzyrlhunov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063890.39178-1988-84190765468012/AnsiballZ_file.py'
Nov 25 09:44:50 compute-0 sudo[208388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:50 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:50 compute-0 python3.9[208390]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:50 compute-0 sudo[208388]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:50 compute-0 ceph-mon[74207]: pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:51 compute-0 sudo[208540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtflvhafdnvgtjgidzcqqhivvlfzsioq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063890.8271945-1988-47252535442077/AnsiballZ_file.py'
Nov 25 09:44:51 compute-0 sudo[208540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:51.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:51 compute-0 python3.9[208542]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:51 compute-0 sudo[208540]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:51 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:51 compute-0 sudo[208692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aveveiqhsxifyiobwhqxkdylhapmcupq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063891.2699842-1988-190862346553273/AnsiballZ_file.py'
Nov 25 09:44:51 compute-0 sudo[208692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:51 compute-0 python3.9[208694]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:51 compute-0 sudo[208692]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:51.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:51 compute-0 sudo[208846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmecugalghrmjnajnmdkydjakobpotqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063891.7113955-1988-215972063511610/AnsiballZ_file.py'
Nov 25 09:44:51 compute-0 sudo[208846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:52 compute-0 python3.9[208848]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:52 compute-0 sudo[208846]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:52 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2800f6e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:52 compute-0 sudo[208998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouztnobvwfquzqicgbryebdicazmpgpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063892.2597947-1988-46659859263504/AnsiballZ_file.py'
Nov 25 09:44:52 compute-0 sudo[208998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:52 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:52 compute-0 python3.9[209000]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:52 compute-0 sudo[208998]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:52 compute-0 ceph-mon[74207]: pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:44:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:52 compute-0 sudo[209150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vllwuigbsysxinbdmxfqqisyhzalitdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063892.7180848-1988-182303689879551/AnsiballZ_file.py'
Nov 25 09:44:52 compute-0 sudo[209150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:53 compute-0 python3.9[209152]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:53 compute-0 sudo[209150]: pam_unix(sudo:session): session closed for user root
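[editor's note] The run of ansible.builtin.file tasks ending here creates an /etc/systemd/system/<unit>.socket.d drop-in directory for each libvirt socket (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd, including -ro and -admin variants where the socket has them); the override.conf copies that follow below populate them from libvirt-socket.unit.j2. systemd merges a drop-in over the packaged unit, so the overrides take effect without editing anything under /usr/lib/systemd. The rendered content itself is not logged (content=NOT_LOGGING_PARAMETER, only its checksum), but the merge result can be inspected with the standard "systemctl cat":

    import subprocess

    # Show each packaged socket unit with its drop-in appended; unit names
    # are taken from the directories created above.
    for unit in ("virtlogd.socket", "virtqemud.socket", "virtsecretd.socket"):
        subprocess.run(["systemctl", "cat", unit], check=True)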
Nov 25 09:44:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:53.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:53 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:53 compute-0 sudo[209302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfqrikoygpivixsiiqzqaarfigrprvpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063893.4824984-2285-244795371447863/AnsiballZ_stat.py'
Nov 25 09:44:53 compute-0 sudo[209302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:44:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:53.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:44:53 compute-0 python3.9[209304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:53 compute-0 sudo[209302]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:54 compute-0 sudo[209427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbluipyiiijuluqnlhttlssxjsdtenwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063893.4824984-2285-244795371447863/AnsiballZ_copy.py'
Nov 25 09:44:54 compute-0 sudo[209427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:54 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:54 compute-0 python3.9[209429]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063893.4824984-2285-244795371447863/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:54 compute-0 sudo[209427]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:54 compute-0 sudo[209579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tahadmgqtzicozpvisncfrhbgjgvoyet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063894.3508487-2285-251367036127335/AnsiballZ_stat.py'
Nov 25 09:44:54 compute-0 sudo[209579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:54 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2801b760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:54 compute-0 python3.9[209581]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:54 compute-0 sudo[209579]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:54 compute-0 ceph-mon[74207]: pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:54 compute-0 sudo[209702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpjovnwvmwlxqyqqcazhpgzwikgchyro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063894.3508487-2285-251367036127335/AnsiballZ_copy.py'
Nov 25 09:44:54 compute-0 sudo[209702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:55 compute-0 python3.9[209704]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063894.3508487-2285-251367036127335/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:55 compute-0 sudo[209702]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:55.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:55 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:44:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
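[editor's note] The pg_autoscaler figures above are internally consistent: each pool's "pg target" is used_ratio * bias * a cluster PG budget, and the budget that reproduces every line is 300 (plausibly mon_target_pg_per_osd=100 across the 3 OSDs backing this 60 GiB cluster; the budget is inferred from the arithmetic, not stated in the log). The tiny targets are then quantized up to the pool's floor, which is why they land on the current values of 1, 16, or 32. Checking a few rows:

    # used_ratio and bias copied from the log lines above; 300 is the
    # inferred PG budget.
    rows = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0),
        (".nfs",               6.359070782053786e-08, 1.0),
    ]
    for pool, used_ratio, bias in rows:
        # Reproduces the logged "pg target" to float precision, e.g.
        # .mgr -> 0.0021557249951162337
        print(pool, used_ratio * bias * 300)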
Nov 25 09:44:55 compute-0 sudo[209854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucusuhnajibwtkyhfkysxsnegfcsxynh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063895.3307948-2285-139634287252556/AnsiballZ_stat.py'
Nov 25 09:44:55 compute-0 sudo[209854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:55 compute-0 python3.9[209856]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:55 compute-0 sudo[209854]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:55.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:55 compute-0 ceph-mon[74207]: pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:55 compute-0 sudo[209979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbtqspepexmldhlkxyuehgaswragoohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063895.3307948-2285-139634287252556/AnsiballZ_copy.py'
Nov 25 09:44:55 compute-0 sudo[209979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:56 compute-0 python3.9[209981]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063895.3307948-2285-139634287252556/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:56 compute-0 sudo[209979]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:56 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:56 compute-0 sudo[210131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omoieqiylpseqeheamvkqkkysrhvnamk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063896.2196286-2285-193394068707082/AnsiballZ_stat.py'
Nov 25 09:44:56 compute-0 sudo[210131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:56 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:56 compute-0 python3.9[210133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:56 compute-0 sudo[210131]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:56 compute-0 sudo[210254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huphjypplteplimjconaevohkaeufvci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063896.2196286-2285-193394068707082/AnsiballZ_copy.py'
Nov 25 09:44:56 compute-0 sudo[210254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:56.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:56 compute-0 python3.9[210256]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063896.2196286-2285-193394068707082/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:56.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:56.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:44:56.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:44:57 compute-0 sudo[210254]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:57.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:57 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2801b760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:57 compute-0 sudo[210406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzohneitfivsijkwgquvurqkquglobin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063897.1242535-2285-220563041551261/AnsiballZ_stat.py'
Nov 25 09:44:57 compute-0 sudo[210406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:57 compute-0 python3.9[210408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:57 compute-0 sudo[210406]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:57.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:57 compute-0 sudo[210530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zehldosyepmvzfyyxbdfvrinxkyhqgel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063897.1242535-2285-220563041551261/AnsiballZ_copy.py'
Nov 25 09:44:57 compute-0 sudo[210530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:44:57 compute-0 python3.9[210532]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063897.1242535-2285-220563041551261/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:57 compute-0 sudo[210530]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:58 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d200a7a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:58 compute-0 ceph-mon[74207]: pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:44:58 compute-0 sudo[210683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgxjdsacdfcsuafejcpnhriigexxtfip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063898.0855062-2285-134529506630742/AnsiballZ_stat.py'
Nov 25 09:44:58 compute-0 sudo[210683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:58 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:58 compute-0 python3.9[210685]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:58 compute-0 sudo[210683]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:58 compute-0 sudo[210806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lowpzmfwrjqhyxmbfwhgfobppuubaeoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063898.0855062-2285-134529506630742/AnsiballZ_copy.py'
Nov 25 09:44:58 compute-0 sudo[210806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:59 compute-0 python3.9[210808]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063898.0855062-2285-134529506630742/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:44:59 compute-0 sudo[210806]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:59.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:44:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:44:59 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:44:59 compute-0 sudo[210958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrmusbwmzvfoarxsnprbaklkcmnhzfrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063899.2268808-2285-271786031603043/AnsiballZ_stat.py'
Nov 25 09:44:59 compute-0 sudo[210958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:59 compute-0 python3.9[210960]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:44:59 compute-0 sudo[210958]: pam_unix(sudo:session): session closed for user root
Nov 25 09:44:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:44:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:44:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:59.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:44:59 compute-0 sudo[211083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgsufyvfifcpsdvmnirjefcyklubqnwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063899.2268808-2285-271786031603043/AnsiballZ_copy.py'
Nov 25 09:44:59 compute-0 sudo[211083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:44:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:44:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:00 compute-0 python3.9[211085]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063899.2268808-2285-271786031603043/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:00 compute-0 sudo[211083]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:45:00 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2801b900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:00] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:45:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:00] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:45:00 compute-0 sudo[211133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:00 compute-0 sudo[211133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:00 compute-0 sudo[211133]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:00 compute-0 ceph-mon[74207]: pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:00 compute-0 sudo[211260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pynbanaihbiqibfsiieuacewynivwbbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063900.244257-2285-188919651228006/AnsiballZ_stat.py'
Nov 25 09:45:00 compute-0 sudo[211260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:45:00 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2801b900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:00 compute-0 python3.9[211262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:00 compute-0 sudo[211260]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:00 compute-0 sudo[211383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltcdoxkifygiwvwwpsqtootfbmqqjlon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063900.244257-2285-188919651228006/AnsiballZ_copy.py'
Nov 25 09:45:00 compute-0 sudo[211383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:01 compute-0 python3.9[211385]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063900.244257-2285-188919651228006/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:01 compute-0 sudo[211383]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:01.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:45:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:45:01 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:01 compute-0 sudo[211536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxazenljjvsloqaogvpvwnuueogcwzeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063901.3418977-2285-165493442948625/AnsiballZ_stat.py'
Nov 25 09:45:01 compute-0 sudo[211536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:01.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:01 compute-0 python3.9[211538]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:01 compute-0 sudo[211536]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:02 compute-0 sudo[211661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfemljldrqclighyhaoerrvzbmvtrpko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063901.3418977-2285-165493442948625/AnsiballZ_copy.py'
Nov 25 09:45:02 compute-0 sudo[211661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:02 compute-0 python3.9[211663]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063901.3418977-2285-165493442948625/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:02 compute-0 sudo[211661]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:45:02 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:02 compute-0 ceph-mon[74207]: pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:45:02 compute-0 sudo[211813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jebvwoshzmecckxyxhzszxrpabmzcglj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063902.3119755-2285-22859621443630/AnsiballZ_stat.py'
Nov 25 09:45:02 compute-0 sudo[211813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:45:02 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:02 compute-0 python3.9[211815]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:02 compute-0 sudo[211813]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:02 compute-0 sudo[211936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlofonamrjcudxxpakagvdqrqegoeiit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063902.3119755-2285-22859621443630/AnsiballZ_copy.py'
Nov 25 09:45:02 compute-0 sudo[211936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:03 compute-0 python3.9[211938]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063902.3119755-2285-22859621443630/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:03 compute-0 sudo[211936]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:03.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:45:03 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d2801b900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:03 compute-0 sudo[212088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnbcbusdduoyfgzppamjwpsxxglzdlct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063903.2506263-2285-72342141432397/AnsiballZ_stat.py'
Nov 25 09:45:03 compute-0 sudo[212088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:03 compute-0 python3.9[212090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:03 compute-0 sudo[212088]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:03.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:03 compute-0 sudo[212213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keewivrsjvgcjpgcuubsakokyiugfbpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063903.2506263-2285-72342141432397/AnsiballZ_copy.py'
Nov 25 09:45:03 compute-0 sudo[212213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:04 compute-0 python3.9[212215]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063903.2506263-2285-72342141432397/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:04 compute-0 sudo[212213]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:45:04 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d10005da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:04 compute-0 ceph-mon[74207]: pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:04 compute-0 sudo[212365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmdkyzgaesfuplfpfvclnovebwjvntxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063904.3163595-2285-76864827885195/AnsiballZ_stat.py'
Nov 25 09:45:04 compute-0 sudo[212365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:45:04 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ce0001e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:04 compute-0 python3.9[212367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:04 compute-0 sudo[212365]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:04 compute-0 sudo[212488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbpdwpnvgmhmlvkyotccjbrmqailygor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063904.3163595-2285-76864827885195/AnsiballZ_copy.py'
Nov 25 09:45:04 compute-0 sudo[212488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:05 compute-0 python3.9[212490]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063904.3163595-2285-76864827885195/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:05 compute-0 sudo[212488]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:05.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:05 compute-0 kernel: ganesha.nfsd[177212]: segfault at 50 ip 00007f3d933e532e sp 00007f3d627fb210 error 4 in libntirpc.so.5.8[7f3d933ca000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 25 09:45:05 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:45:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[164959]: 25/11/2025 09:45:05 : epoch 692579f0 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3d14007d60 fd 38 proxy ignored for local
Nov 25 09:45:05 compute-0 systemd[1]: Started Process Core Dump (PID 212590/UID 0).
Nov 25 09:45:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:45:05.373 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:45:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:45:05.373 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:45:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:45:05.373 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:45:05 compute-0 sudo[212642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txmeasaxaamaebddscujenrkstvwsico ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063905.198505-2285-96649535934695/AnsiballZ_stat.py'
Nov 25 09:45:05 compute-0 sudo[212642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:05 compute-0 python3.9[212644]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:05 compute-0 sudo[212642]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:05.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:05 compute-0 sudo[212766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtkemhntwccyldlbkmbebhxikwrqtgbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063905.198505-2285-96649535934695/AnsiballZ_copy.py'
Nov 25 09:45:05 compute-0 sudo[212766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:05 compute-0 python3.9[212768]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063905.198505-2285-96649535934695/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:05 compute-0 sudo[212766]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:06 compute-0 sudo[212919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miaukjmcfwcsbgmjnnvlpklulmjveeju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063906.0903194-2285-41756024603283/AnsiballZ_stat.py'
Nov 25 09:45:06 compute-0 sudo[212919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:06 compute-0 ceph-mon[74207]: pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:06 compute-0 systemd-coredump[212605]: Process 164963 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 60:
                                                    #0  0x00007f3d933e532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:45:06 compute-0 python3.9[212921]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:06 compute-0 sudo[212919]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:06 compute-0 systemd[1]: systemd-coredump@4-212590-0.service: Deactivated successfully.
Nov 25 09:45:06 compute-0 systemd[1]: systemd-coredump@4-212590-0.service: Consumed 1.068s CPU time.
Nov 25 09:45:06 compute-0 podman[212927]: 2025-11-25 09:45:06.509312843 +0000 UTC m=+0.036797345 container died 2e11d8cadc4f0221377ded1dacfd486197db235defde3f188dca2feab655fa47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d0d1a5518cb52e3d3fa45065936145ac82534e8879d62d53263e9ef43e4207d-merged.mount: Deactivated successfully.
Nov 25 09:45:06 compute-0 podman[212926]: 2025-11-25 09:45:06.526571187 +0000 UTC m=+0.052003693 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:45:06 compute-0 podman[212927]: 2025-11-25 09:45:06.531142342 +0000 UTC m=+0.058626846 container remove 2e11d8cadc4f0221377ded1dacfd486197db235defde3f188dca2feab655fa47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 09:45:06 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:45:06 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:45:06 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.182s CPU time.
Nov 25 09:45:06 compute-0 sudo[213094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhqgqcmohjrybuakatxlheidxjhflszg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063906.0903194-2285-41756024603283/AnsiballZ_copy.py'
Nov 25 09:45:06 compute-0 sudo[213094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:06 compute-0 python3.9[213096]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063906.0903194-2285-41756024603283/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:06 compute-0 sudo[213094]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:06.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:06.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:06.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:06.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:07.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:45:07 compute-0 python3.9[213246]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:07.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:08 compute-0 sudo[213401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qklwttuglitlxxtpqccxtpptssmpjjyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063907.7989728-2903-126673414014263/AnsiballZ_seboolean.py'
Nov 25 09:45:08 compute-0 sudo[213401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:08 compute-0 python3.9[213403]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 25 09:45:08 compute-0 ceph-mon[74207]: pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:45:09 compute-0 sudo[213401]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:09.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:09 compute-0 sudo[213557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqkfoetrjpnqucszouwcdxccqxydeqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063909.3590071-2927-5566932928515/AnsiballZ_copy.py'
Nov 25 09:45:09 compute-0 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 09:45:09 compute-0 sudo[213557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:09.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:09 compute-0 python3.9[213559]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:09 compute-0 sudo[213557]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:10 compute-0 sudo[213711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nengcktuwmgxzydccdqxewsltiipqgvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063909.8615775-2927-110865572488803/AnsiballZ_copy.py'
Nov 25 09:45:10 compute-0 sudo[213711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:10 compute-0 python3.9[213713]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:10 compute-0 sudo[213711]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:10] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:45:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:10] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:45:10 compute-0 ceph-mon[74207]: pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:10 compute-0 sudo[213863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cammjkfeugtzapvqfbherqccmbpnqhgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063910.4313326-2927-17369139295917/AnsiballZ_copy.py'
Nov 25 09:45:10 compute-0 sudo[213863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:10 compute-0 python3.9[213865]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:10 compute-0 sudo[213863]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:11 compute-0 sudo[214015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxvpzbmdaioiucnoxixoucfbsmrtwhoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063910.90792-2927-253108958288057/AnsiballZ_copy.py'
Nov 25 09:45:11 compute-0 sudo[214015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:11.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:11 compute-0 python3.9[214017]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:11 compute-0 sudo[214015]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:45:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094511 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:45:11 compute-0 sudo[214167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywospemqunmtybjjwxnpukarmcrexyqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063911.3762336-2927-200701352089470/AnsiballZ_copy.py'
Nov 25 09:45:11 compute-0 sudo[214167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:11 compute-0 python3.9[214169]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:11.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:11 compute-0 sudo[214167]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:11 compute-0 sudo[214195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:45:11 compute-0 sudo[214195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:11 compute-0 sudo[214195]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:11 compute-0 sudo[214220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:45:11 compute-0 sudo[214220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:12 compute-0 sudo[214383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amgihjjfyunxmzazjnymqepmcjkncptv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063911.9227216-3035-96636012080018/AnsiballZ_copy.py'
Nov 25 09:45:12 compute-0 sudo[214383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:12 compute-0 python3.9[214385]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:12 compute-0 sudo[214383]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:12 compute-0 sudo[214220]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:45:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:45:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:12 compute-0 sudo[214462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:45:12 compute-0 sudo[214462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:12 compute-0 sudo[214462]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:12 compute-0 sudo[214509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:45:12 compute-0 sudo[214509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:12 compute-0 sudo[214602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifhostfbebwidudnhidluqugpblloeae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063912.3928611-3035-91189923900481/AnsiballZ_copy.py'
Nov 25 09:45:12 compute-0 sudo[214602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:12 compute-0 python3.9[214604]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:12 compute-0 sudo[214602]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:12 compute-0 podman[214636]: 2025-11-25 09:45:12.792153154 +0000 UTC m=+0.029386907 container create 6429e5952110dcd2af4ea1557462cfbbc9b8a04e58b6e471e6b9414f6b01832b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:45:12 compute-0 systemd[1]: Started libpod-conmon-6429e5952110dcd2af4ea1557462cfbbc9b8a04e58b6e471e6b9414f6b01832b.scope.
Nov 25 09:45:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:12 compute-0 podman[214636]: 2025-11-25 09:45:12.851853502 +0000 UTC m=+0.089087276 container init 6429e5952110dcd2af4ea1557462cfbbc9b8a04e58b6e471e6b9414f6b01832b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:45:12 compute-0 podman[214636]: 2025-11-25 09:45:12.856294094 +0000 UTC m=+0.093527848 container start 6429e5952110dcd2af4ea1557462cfbbc9b8a04e58b6e471e6b9414f6b01832b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Nov 25 09:45:12 compute-0 podman[214636]: 2025-11-25 09:45:12.857434273 +0000 UTC m=+0.094668027 container attach 6429e5952110dcd2af4ea1557462cfbbc9b8a04e58b6e471e6b9414f6b01832b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:45:12 compute-0 optimistic_germain[214666]: 167 167
Nov 25 09:45:12 compute-0 systemd[1]: libpod-6429e5952110dcd2af4ea1557462cfbbc9b8a04e58b6e471e6b9414f6b01832b.scope: Deactivated successfully.
Nov 25 09:45:12 compute-0 podman[214636]: 2025-11-25 09:45:12.862522645 +0000 UTC m=+0.099756399 container died 6429e5952110dcd2af4ea1557462cfbbc9b8a04e58b6e471e6b9414f6b01832b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0c9445ec0a3dcfbe7ff30780769d5610e1506e9ad3aea569e8ead55b31daa81-merged.mount: Deactivated successfully.
Nov 25 09:45:12 compute-0 podman[214636]: 2025-11-25 09:45:12.779333457 +0000 UTC m=+0.016567232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:45:12 compute-0 podman[214636]: 2025-11-25 09:45:12.88863046 +0000 UTC m=+0.125864224 container remove 6429e5952110dcd2af4ea1557462cfbbc9b8a04e58b6e471e6b9414f6b01832b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:45:12 compute-0 systemd[1]: libpod-conmon-6429e5952110dcd2af4ea1557462cfbbc9b8a04e58b6e471e6b9414f6b01832b.scope: Deactivated successfully.
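The burst of podman events above (create, init, start, attach, died, remove, with the image-pull event flushed out of order at m=+0.016) is one short-lived helper container. Its only output, "167 167", looks like a uid/gid probe of the ceph user (uid and gid 167 in the ceph images), the kind of check cephadm runs before writing into /var/lib/ceph. A hedged reproduction of such a probe with a throwaway container, assuming the image digest from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # --rm removes the container on exit, matching the create/.../remove burst above
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())   # expected "167 167" on a stock ceph image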
Nov 25 09:45:13 compute-0 podman[214751]: 2025-11-25 09:45:13.011125401 +0000 UTC m=+0.026966725 container create 0b47576191dc6e89163e7ba3ccf9e09b58956be2f8da1aec1328063e1a85a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:45:13 compute-0 systemd[1]: Started libpod-conmon-0b47576191dc6e89163e7ba3ccf9e09b58956be2f8da1aec1328063e1a85a88b.scope.
Nov 25 09:45:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be060dfbdf22a8269c06ea2ddf591eb70f8995008605d6e21d9ccdec08eb82ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be060dfbdf22a8269c06ea2ddf591eb70f8995008605d6e21d9ccdec08eb82ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be060dfbdf22a8269c06ea2ddf591eb70f8995008605d6e21d9ccdec08eb82ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be060dfbdf22a8269c06ea2ddf591eb70f8995008605d6e21d9ccdec08eb82ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be060dfbdf22a8269c06ea2ddf591eb70f8995008605d6e21d9ccdec08eb82ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
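The xfs "supports timestamps until 2038 (0x7fffffff)" lines are informational, not errors: these overlay mounts carry the legacy XFS inode timestamp format, whose 32-bit signed epoch counter tops out at 0x7fffffff. A quick check of what that limit means in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch-seconds value
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())   # 2038-01-19T03:14:07+00:00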
Nov 25 09:45:13 compute-0 podman[214751]: 2025-11-25 09:45:13.06580691 +0000 UTC m=+0.081648264 container init 0b47576191dc6e89163e7ba3ccf9e09b58956be2f8da1aec1328063e1a85a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_brattain, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:45:13 compute-0 podman[214751]: 2025-11-25 09:45:13.070729068 +0000 UTC m=+0.086570392 container start 0b47576191dc6e89163e7ba3ccf9e09b58956be2f8da1aec1328063e1a85a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 25 09:45:13 compute-0 podman[214751]: 2025-11-25 09:45:13.073516343 +0000 UTC m=+0.089357666 container attach 0b47576191dc6e89163e7ba3ccf9e09b58956be2f8da1aec1328063e1a85a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:45:13 compute-0 podman[214751]: 2025-11-25 09:45:13.000051496 +0000 UTC m=+0.015892820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:45:13 compute-0 sudo[214838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cirvglybhajmfdtdchsubfhwhufutedw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063912.914537-3035-228294896425622/AnsiballZ_copy.py'
Nov 25 09:45:13 compute-0 sudo[214838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:13.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
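The radosgw "beast" lines are the RGW frontend's access log; the anonymous "HEAD / HTTP/1.0" requests arriving every couple of seconds from 192.168.122.100 and .102 have the shape of load-balancer health probes rather than client traffic. A small parser for this line format, built only from the fields visible above (client IP, user, timestamp, request, status, bytes, trailing latency):

    import re

    LINE = ('beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous '
            '[25/Nov/2025:09:45:13.179 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000010s')

    # Combined-log-like prefix followed by RGW's trailing latency field
    pat = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )
    m = pat.match(LINE)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))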
Nov 25 09:45:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:45:13 compute-0 python3.9[214840]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:13 compute-0 vigilant_brattain[214796]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:45:13 compute-0 vigilant_brattain[214796]: --> All data devices are unavailable
Nov 25 09:45:13 compute-0 sudo[214838]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:13 compute-0 systemd[1]: libpod-0b47576191dc6e89163e7ba3ccf9e09b58956be2f8da1aec1328063e1a85a88b.scope: Deactivated successfully.
Nov 25 09:45:13 compute-0 podman[214751]: 2025-11-25 09:45:13.335103342 +0000 UTC m=+0.350944666 container died 0b47576191dc6e89163e7ba3ccf9e09b58956be2f8da1aec1328063e1a85a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 09:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-be060dfbdf22a8269c06ea2ddf591eb70f8995008605d6e21d9ccdec08eb82ab-merged.mount: Deactivated successfully.
Nov 25 09:45:13 compute-0 podman[214751]: 2025-11-25 09:45:13.359865921 +0000 UTC m=+0.375707245 container remove 0b47576191dc6e89163e7ba3ccf9e09b58956be2f8da1aec1328063e1a85a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:45:13 compute-0 systemd[1]: libpod-conmon-0b47576191dc6e89163e7ba3ccf9e09b58956be2f8da1aec1328063e1a85a88b.scope: Deactivated successfully.
Nov 25 09:45:13 compute-0 sudo[214509]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:13 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:13 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:13 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:45:13 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:45:13 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:45:13 compute-0 sudo[214887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:45:13 compute-0 sudo[214887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:13 compute-0 sudo[214887]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:13 compute-0 sudo[214936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:45:13 compute-0 sudo[214936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:13 compute-0 sudo[215059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzichhtqvhcpskaejlxujaacrtybkmzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063913.4396605-3035-179018921005456/AnsiballZ_copy.py'
Nov 25 09:45:13 compute-0 sudo[215059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:13.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:13 compute-0 python3.9[215061]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:13 compute-0 podman[215094]: 2025-11-25 09:45:13.799144402 +0000 UTC m=+0.027275957 container create 9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:45:13 compute-0 sudo[215059]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:13 compute-0 systemd[1]: Started libpod-conmon-9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04.scope.
Nov 25 09:45:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:13 compute-0 podman[215094]: 2025-11-25 09:45:13.849537385 +0000 UTC m=+0.077668962 container init 9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:45:13 compute-0 podman[215094]: 2025-11-25 09:45:13.853952438 +0000 UTC m=+0.082083994 container start 9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mcclintock, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 09:45:13 compute-0 podman[215094]: 2025-11-25 09:45:13.855111854 +0000 UTC m=+0.083243431 container attach 9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mcclintock, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:45:13 compute-0 eloquent_mcclintock[215111]: 167 167
Nov 25 09:45:13 compute-0 systemd[1]: libpod-9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04.scope: Deactivated successfully.
Nov 25 09:45:13 compute-0 conmon[215111]: conmon 9fe61b53aa807fa7c2d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04.scope/container/memory.events
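The conmon warning is a benign race: the container exited so quickly that systemd tore the scope's cgroup down before conmon could open memory.events for OOM monitoring. On a still-running cgroup that file is plain-text counters; a minimal reader, with the scope path copied from the log line (it only exists while that scope is alive, so treat it as illustrative):

    from pathlib import Path

    events = Path("/sys/fs/cgroup/machine.slice/"
                  "libpod-9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04.scope"
                  "/container/memory.events")

    if events.exists():
        # Lines look like "oom 0" / "oom_kill 0"
        counters = dict(line.split() for line in events.read_text().splitlines())
        print("oom_kill =", counters.get("oom_kill", "0"))
    else:
        print("cgroup already gone - the same race conmon hit")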
Nov 25 09:45:13 compute-0 podman[215136]: 2025-11-25 09:45:13.88784048 +0000 UTC m=+0.019136625 container died 9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:45:13 compute-0 podman[215094]: 2025-11-25 09:45:13.78857604 +0000 UTC m=+0.016707616 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb4e229027e8d559d1b0913a616678fbb7c02d5f20baddc3156193aad4856a61-merged.mount: Deactivated successfully.
Nov 25 09:45:13 compute-0 podman[215136]: 2025-11-25 09:45:13.909646486 +0000 UTC m=+0.040942611 container remove 9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:45:13 compute-0 systemd[1]: libpod-conmon-9fe61b53aa807fa7c2d17dd16ae16e9c8fa1963f13fb936f0f96cded6302fc04.scope: Deactivated successfully.
Nov 25 09:45:14 compute-0 podman[215231]: 2025-11-25 09:45:14.034813685 +0000 UTC m=+0.029636808 container create df6265330622d3ebb3157b6008803c336bb596a13320d54df2d6508cd0f55c4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:45:14 compute-0 systemd[1]: Started libpod-conmon-df6265330622d3ebb3157b6008803c336bb596a13320d54df2d6508cd0f55c4d.scope.
Nov 25 09:45:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d05644ce06bb514a24b8a45dfa6d266c12e019589aae456855baf4356c33603/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d05644ce06bb514a24b8a45dfa6d266c12e019589aae456855baf4356c33603/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d05644ce06bb514a24b8a45dfa6d266c12e019589aae456855baf4356c33603/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d05644ce06bb514a24b8a45dfa6d266c12e019589aae456855baf4356c33603/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:14 compute-0 podman[215231]: 2025-11-25 09:45:14.087413008 +0000 UTC m=+0.082236131 container init df6265330622d3ebb3157b6008803c336bb596a13320d54df2d6508cd0f55c4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:45:14 compute-0 podman[215231]: 2025-11-25 09:45:14.095198504 +0000 UTC m=+0.090021628 container start df6265330622d3ebb3157b6008803c336bb596a13320d54df2d6508cd0f55c4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lumiere, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:45:14 compute-0 podman[215231]: 2025-11-25 09:45:14.096344073 +0000 UTC m=+0.091167197 container attach df6265330622d3ebb3157b6008803c336bb596a13320d54df2d6508cd0f55c4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:45:14 compute-0 podman[215231]: 2025-11-25 09:45:14.023755238 +0000 UTC m=+0.018578382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:45:14 compute-0 sudo[215317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhkumxajrjttlfqewaxxvohdrtjqiokf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063913.9117374-3035-216439633637785/AnsiballZ_copy.py'
Nov 25 09:45:14 compute-0 sudo[215317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:14 compute-0 podman[215272]: 2025-11-25 09:45:14.156857184 +0000 UTC m=+0.066074447 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
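Interleaved with the cephadm probes, podman emits a health_status event for the long-running ovn_controller container: health_status=healthy, failing streak 0, plus the full config_data the EDPM role created it from (host networking, privileged, the /openstack/healthcheck test). Such events can be followed programmatically; a sketch using podman's Go-template JSON output, with the container name from the log (field names as in recent podman releases):

    import json
    import subprocess

    # Stream health events for one container; '{{json .}}' emits one JSON
    # object per line as events arrive.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "container=ovn_controller",
         "--filter", "event=health_status", "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Name"), ev.get("HealthStatus"))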
Nov 25 09:45:14 compute-0 python3.9[215325]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:14 compute-0 musing_lumiere[215268]: {
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:     "1": [
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:         {
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "devices": [
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "/dev/loop3"
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             ],
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "lv_name": "ceph_lv0",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "lv_size": "21470642176",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "name": "ceph_lv0",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "tags": {
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.cluster_name": "ceph",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.crush_device_class": "",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.encrypted": "0",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.osd_id": "1",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.type": "block",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.vdo": "0",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:                 "ceph.with_tpm": "0"
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             },
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "type": "block",
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:             "vg_name": "ceph_vg0"
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:         }
Nov 25 09:45:14 compute-0 musing_lumiere[215268]:     ]
Nov 25 09:45:14 compute-0 musing_lumiere[215268]: }
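The helper container's stdout above is the JSON from "ceph-volume lvm list --format json" (the command visible in the cephadm sudo line): one OSD, id 1, backed by LV ceph_vg0/ceph_lv0 on /dev/loop3, with the cluster fsid and osd_fsid carried in the LV tags. This is also why the earlier scan reported "0 physical, 1 LVM" data devices and "All data devices are unavailable": the only candidate device is already an OSD. A parser for that payload, assuming it has been captured to a file (the filename is illustrative):

    import json

    # Output of: cephadm ceph-volume ... -- lvm list --format json
    with open("lvm_list.json") as fh:
        report = json.load(fh)

    # Top-level keys are OSD ids; each maps to a list of LVs for that OSD.
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']}, "
                  f"osd_fsid={tags['ceph.osd_fsid']}")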
Nov 25 09:45:14 compute-0 sudo[215317]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:14 compute-0 systemd[1]: libpod-df6265330622d3ebb3157b6008803c336bb596a13320d54df2d6508cd0f55c4d.scope: Deactivated successfully.
Nov 25 09:45:14 compute-0 podman[215231]: 2025-11-25 09:45:14.353657518 +0000 UTC m=+0.348480641 container died df6265330622d3ebb3157b6008803c336bb596a13320d54df2d6508cd0f55c4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lumiere, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d05644ce06bb514a24b8a45dfa6d266c12e019589aae456855baf4356c33603-merged.mount: Deactivated successfully.
Nov 25 09:45:14 compute-0 podman[215231]: 2025-11-25 09:45:14.376486031 +0000 UTC m=+0.371309154 container remove df6265330622d3ebb3157b6008803c336bb596a13320d54df2d6508cd0f55c4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lumiere, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:45:14 compute-0 systemd[1]: libpod-conmon-df6265330622d3ebb3157b6008803c336bb596a13320d54df2d6508cd0f55c4d.scope: Deactivated successfully.
Nov 25 09:45:14 compute-0 ceph-mon[74207]: pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:45:14 compute-0 sudo[214936]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:14 compute-0 sudo[215364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:45:14 compute-0 sudo[215364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:14 compute-0 sudo[215364]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:14 compute-0 sudo[215389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:45:14 compute-0 sudo[215389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:14 compute-0 sudo[215577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojlyrwxmiaqpsvccniplpgdxonuabucq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063914.5939193-3143-244879860518086/AnsiballZ_systemd.py'
Nov 25 09:45:14 compute-0 sudo[215577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:14 compute-0 podman[215552]: 2025-11-25 09:45:14.806563056 +0000 UTC m=+0.032375400 container create d8d10020dc087f3be2a71ca960bdfc05ea1f47f975fbdbf7d499dda9ac6eb8ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:45:14 compute-0 systemd[1]: Started libpod-conmon-d8d10020dc087f3be2a71ca960bdfc05ea1f47f975fbdbf7d499dda9ac6eb8ad.scope.
Nov 25 09:45:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:14 compute-0 podman[215552]: 2025-11-25 09:45:14.858443163 +0000 UTC m=+0.084255529 container init d8d10020dc087f3be2a71ca960bdfc05ea1f47f975fbdbf7d499dda9ac6eb8ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cerf, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:45:14 compute-0 podman[215552]: 2025-11-25 09:45:14.863408122 +0000 UTC m=+0.089220467 container start d8d10020dc087f3be2a71ca960bdfc05ea1f47f975fbdbf7d499dda9ac6eb8ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:45:14 compute-0 podman[215552]: 2025-11-25 09:45:14.864778246 +0000 UTC m=+0.090590591 container attach d8d10020dc087f3be2a71ca960bdfc05ea1f47f975fbdbf7d499dda9ac6eb8ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:45:14 compute-0 kind_cerf[215586]: 167 167
Nov 25 09:45:14 compute-0 systemd[1]: libpod-d8d10020dc087f3be2a71ca960bdfc05ea1f47f975fbdbf7d499dda9ac6eb8ad.scope: Deactivated successfully.
Nov 25 09:45:14 compute-0 podman[215552]: 2025-11-25 09:45:14.867583313 +0000 UTC m=+0.093395679 container died d8d10020dc087f3be2a71ca960bdfc05ea1f47f975fbdbf7d499dda9ac6eb8ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cerf, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-baeaab34e2f51234776bfd41b80a2616dc0b49096777c3b0523fdaf6d8d09a34-merged.mount: Deactivated successfully.
Nov 25 09:45:14 compute-0 podman[215552]: 2025-11-25 09:45:14.887969782 +0000 UTC m=+0.113782128 container remove d8d10020dc087f3be2a71ca960bdfc05ea1f47f975fbdbf7d499dda9ac6eb8ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:45:14 compute-0 podman[215552]: 2025-11-25 09:45:14.793447902 +0000 UTC m=+0.019260267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:45:14 compute-0 systemd[1]: libpod-conmon-d8d10020dc087f3be2a71ca960bdfc05ea1f47f975fbdbf7d499dda9ac6eb8ad.scope: Deactivated successfully.
Nov 25 09:45:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:45:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:15 compute-0 podman[215607]: 2025-11-25 09:45:15.007701015 +0000 UTC m=+0.027271899 container create bc1e1b88891e2dfc7c0ae6135eeb5d329949ed4bb599e7d36533955c7835e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:45:15 compute-0 systemd[1]: Started libpod-conmon-bc1e1b88891e2dfc7c0ae6135eeb5d329949ed4bb599e7d36533955c7835e9f3.scope.
Nov 25 09:45:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb3e3fd44e3ab91a77d230008dc23409a62e76dfbcbe1bb8704451852a47e3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb3e3fd44e3ab91a77d230008dc23409a62e76dfbcbe1bb8704451852a47e3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb3e3fd44e3ab91a77d230008dc23409a62e76dfbcbe1bb8704451852a47e3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb3e3fd44e3ab91a77d230008dc23409a62e76dfbcbe1bb8704451852a47e3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:15 compute-0 python3.9[215582]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:15 compute-0 podman[215607]: 2025-11-25 09:45:15.072036011 +0000 UTC m=+0.091606896 container init bc1e1b88891e2dfc7c0ae6135eeb5d329949ed4bb599e7d36533955c7835e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:45:15 compute-0 systemd[1]: Reloading.
Nov 25 09:45:15 compute-0 podman[215607]: 2025-11-25 09:45:15.077004537 +0000 UTC m=+0.096575411 container start bc1e1b88891e2dfc7c0ae6135eeb5d329949ed4bb599e7d36533955c7835e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ellis, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:45:15 compute-0 podman[215607]: 2025-11-25 09:45:15.078494365 +0000 UTC m=+0.098065250 container attach bc1e1b88891e2dfc7c0ae6135eeb5d329949ed4bb599e7d36533955c7835e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:45:15 compute-0 podman[215607]: 2025-11-25 09:45:14.996604757 +0000 UTC m=+0.016175662 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:45:15 compute-0 systemd-sysv-generator[215647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:15 compute-0 systemd-rc-local-generator[215644]: /etc/rc.d/rc.local is not marked executable, skipping.
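The "Reloading." line and the two generator messages are the visible side of the Ansible systemd task above (daemon_reload=True before restarting virtlogd.service): every daemon-reload reruns the unit generators, so the SysV-compat warning for /etc/rc.d/init.d/network and the non-executable rc.local notice repeat on each reload. What the module does on the host reduces, as a sketch with error handling omitted, to:

    import subprocess

    # ansible.builtin.systemd with daemon_reload=True, state=restarted
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "restart", "virtlogd.service"], check=True)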
Nov 25 09:45:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:15.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:45:15 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 09:45:15 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 09:45:15 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 09:45:15 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 09:45:15 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 25 09:45:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:15 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 25 09:45:15 compute-0 sudo[215577]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:15 compute-0 exciting_ellis[215620]: {}
Nov 25 09:45:15 compute-0 lvm[215785]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:45:15 compute-0 lvm[215785]: VG ceph_vg0 finished
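The lvm[215785] pair is event-based autoactivation: a udev-triggered pvscan saw the last PV of ceph_vg0 (/dev/loop3) come online, declared the VG complete, and finished activating it. Note also that "ceph-volume raw list" (the exciting_ellis container) printed {} just above: no raw-mode OSDs exist here, consistent with the LVM-backed osd.1 listed earlier. The VG/LV state can be confirmed with lvm's JSON reporting; a sketch:

    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_size,devices", "ceph_vg0"],
        capture_output=True, text=True, check=True,
    )
    for lv in json.loads(out.stdout)["report"][0]["lv"]:
        print(lv["vg_name"], lv["lv_name"], lv["lv_size"], lv["devices"])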
Nov 25 09:45:15 compute-0 systemd[1]: libpod-bc1e1b88891e2dfc7c0ae6135eeb5d329949ed4bb599e7d36533955c7835e9f3.scope: Deactivated successfully.
Nov 25 09:45:15 compute-0 podman[215607]: 2025-11-25 09:45:15.633467909 +0000 UTC m=+0.653038804 container died bc1e1b88891e2dfc7c0ae6135eeb5d329949ed4bb599e7d36533955c7835e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ellis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-abb3e3fd44e3ab91a77d230008dc23409a62e76dfbcbe1bb8704451852a47e3b-merged.mount: Deactivated successfully.
Nov 25 09:45:15 compute-0 podman[215607]: 2025-11-25 09:45:15.656335476 +0000 UTC m=+0.675906361 container remove bc1e1b88891e2dfc7c0ae6135eeb5d329949ed4bb599e7d36533955c7835e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:45:15 compute-0 systemd[1]: libpod-conmon-bc1e1b88891e2dfc7c0ae6135eeb5d329949ed4bb599e7d36533955c7835e9f3.scope: Deactivated successfully.
Nov 25 09:45:15 compute-0 sudo[215389]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:45:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:45:15 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:15.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:15 compute-0 sudo[215849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:45:15 compute-0 sudo[215849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:15 compute-0 sudo[215849]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:15 compute-0 sudo[215924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyaknrxzdzwcrpvweesrpniuxzigboak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063915.5978975-3143-157472947899311/AnsiballZ_systemd.py'
Nov 25 09:45:15 compute-0 sudo[215924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:16 compute-0 python3.9[215926]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:16 compute-0 systemd[1]: Reloading.
Nov 25 09:45:16 compute-0 systemd-rc-local-generator[215950]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:16 compute-0 systemd-sysv-generator[215954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:16 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 09:45:16 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 09:45:16 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 09:45:16 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 09:45:16 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 09:45:16 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 09:45:16 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 09:45:16 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 25 09:45:16 compute-0 sudo[215924]: pam_unix(sudo:session): session closed for user root
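The reload/start block above is the effect of a single ansible.builtin.systemd task (daemon_reload=True, state=restarted, scope=system); the same pattern repeats below for virtproxyd.service, virtqemud.service and virtsecretd.service. A minimal ad-hoc equivalent of the logged invocation, assuming compute-0 resolves in the Ansible inventory, would be:

    ansible compute-0 -b -m ansible.builtin.systemd \
      -a 'name=virtnodedevd.service state=restarted daemon_reload=true scope=system'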
Nov 25 09:45:16 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 5.
Nov 25 09:45:16 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:45:16 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.182s CPU time.
Nov 25 09:45:16 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:45:16 compute-0 sudo[216151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizeqylkqbykljygllstkuctligaxkih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063916.4892273-3143-149529920941131/AnsiballZ_systemd.py'
Nov 25 09:45:16 compute-0 sudo[216151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:16 compute-0 ceph-mon[74207]: pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:45:16 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:16 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:45:16 compute-0 podman[216181]: 2025-11-25 09:45:16.82236922 +0000 UTC m=+0.028505616 container create 6e7e3969d8809a42c5f7fed33a41c74167526f2a14054dd7904f01cc28242822 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Nov 25 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a969d191129d9317e8ca85d5012fa114bc2c13c5e34dde5a2e71c222d21f81/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a969d191129d9317e8ca85d5012fa114bc2c13c5e34dde5a2e71c222d21f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a969d191129d9317e8ca85d5012fa114bc2c13c5e34dde5a2e71c222d21f81/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a969d191129d9317e8ca85d5012fa114bc2c13c5e34dde5a2e71c222d21f81/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:45:16 compute-0 podman[216181]: 2025-11-25 09:45:16.868790337 +0000 UTC m=+0.074926753 container init 6e7e3969d8809a42c5f7fed33a41c74167526f2a14054dd7904f01cc28242822 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:45:16 compute-0 podman[216181]: 2025-11-25 09:45:16.872943737 +0000 UTC m=+0.079080133 container start 6e7e3969d8809a42c5f7fed33a41c74167526f2a14054dd7904f01cc28242822 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 09:45:16 compute-0 bash[216181]: 6e7e3969d8809a42c5f7fed33a41c74167526f2a14054dd7904f01cc28242822
Nov 25 09:45:16 compute-0 podman[216181]: 2025-11-25 09:45:16.809924842 +0000 UTC m=+0.016061248 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:45:16 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:45:16 compute-0 python3.9[216161]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:16 compute-0 systemd[1]: Reloading.
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:16.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:16.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:16.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:16.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:17 compute-0 systemd-sysv-generator[216262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:17 compute-0 systemd-rc-local-generator[216256]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:17.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:17 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 09:45:17 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 09:45:17 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 09:45:17 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 09:45:17 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 09:45:17 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 25 09:45:17 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 25 09:45:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:17 compute-0 sudo[216151]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:17 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 09:45:17 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 09:45:17 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 09:45:17 compute-0 sudo[216450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijhhwnaeekkvjbkccldxowmxdclgobek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063917.3811736-3143-252100869769982/AnsiballZ_systemd.py'
Nov 25 09:45:17 compute-0 sudo[216450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:17.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:17 compute-0 python3.9[216453]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:17 compute-0 systemd[1]: Reloading.
Nov 25 09:45:17 compute-0 systemd-rc-local-generator[216478]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:17 compute-0 systemd-sysv-generator[216482]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:18 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 09:45:18 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 09:45:18 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 09:45:18 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 09:45:18 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 09:45:18 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 09:45:18 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 09:45:18 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 09:45:18 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 09:45:18 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 09:45:18 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 09:45:18 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 25 09:45:18 compute-0 sudo[216450]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:18 compute-0 setroubleshoot[216271]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ac188d8a-bed5-4550-a2d2-80c86ed0102f
Nov 25 09:45:18 compute-0 setroubleshoot[216271]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
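The two plugin suggestions above combine into one runnable sequence. This is only a consolidation of what setroubleshoot already prints (the my-virtlogd module name comes from the message itself); restarting virtlogd as the way to re-trigger the AVC is an assumption:

    auditctl -w /etc/shadow -p w                                # plugin 1: turn on full auditing
    systemctl restart virtlogd                                  # assumed way to recreate the AVC
    ausearch -m avc -ts recent                                  # look for a PATH record to fix permissions
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd   # plugin 2: build a local policy module
    semodule -X 300 -i my-virtlogd.pp                           # install it at priority 300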
Nov 25 09:45:18 compute-0 sudo[216671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfkhohogiywzalztarszldfepknicywq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063918.3222919-3143-136135812631100/AnsiballZ_systemd.py'
Nov 25 09:45:18 compute-0 sudo[216671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:18 compute-0 ceph-mon[74207]: pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:18 compute-0 python3.9[216673]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:45:18 compute-0 systemd[1]: Reloading.
Nov 25 09:45:18 compute-0 systemd-sysv-generator[216696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:18 compute-0 systemd-rc-local-generator[216693]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:19 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 09:45:19 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 09:45:19 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 09:45:19 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 09:45:19 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 09:45:19 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 09:45:19 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 09:45:19 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 09:45:19 compute-0 sudo[216671]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:19.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:45:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:19.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:19 compute-0 sudo[216885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyhfhrtygovbwykltpnnjcniyzuvldnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063919.6914945-3254-49822851758504/AnsiballZ_file.py'
Nov 25 09:45:19 compute-0 sudo[216885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:20 compute-0 python3.9[216887]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:20 compute-0 sudo[216885]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:20] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:45:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:20] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:45:20 compute-0 sudo[216935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:20 compute-0 sudo[216935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:20 compute-0 sudo[216935]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:20 compute-0 sudo[217062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkijjwrrutrzonmstswozwfekregzjvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063920.2946544-3278-1719999531376/AnsiballZ_find.py'
Nov 25 09:45:20 compute-0 sudo[217062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:20 compute-0 python3.9[217064]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:45:20 compute-0 sudo[217062]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:20 compute-0 ceph-mon[74207]: pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:45:21 compute-0 sudo[217214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yayzsxepsfrdpwpqsjunvrujrdmosest ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063920.851151-3302-245999046748484/AnsiballZ_command.py'
Nov 25 09:45:21 compute-0 sudo[217214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:21.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:21 compute-0 python3.9[217216]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:21 compute-0 sudo[217214]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:45:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:45:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:21.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:45:21 compute-0 python3.9[217371]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:45:22 compute-0 python3.9[217522]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:22 compute-0 ceph-mon[74207]: pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:45:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:22 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:45:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:22 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:45:23 compute-0 python3.9[217643]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063922.2280283-3359-211428266274685/.source.xml follow=False _original_basename=secret.xml.j2 checksum=ee7fcb172a9e9a6851069e0487499aace39188fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:23.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:45:23 compute-0 sudo[217793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snkedxeqekedhxmjsfobcyokmineesqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063923.3289857-3404-239569220475883/AnsiballZ_command.py'
Nov 25 09:45:23 compute-0 sudo[217793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:23 compute-0 python3.9[217795]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:23 compute-0 polkitd[43345]: Registered Authentication Agent for unix-process:217798:270429 (system bus name :1.2905 [pkttyagent --process 217798 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 09:45:23 compute-0 polkitd[43345]: Unregistered Authentication Agent for unix-process:217798:270429 (system bus name :1.2905, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 09:45:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:23.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:23 compute-0 polkitd[43345]: Registered Authentication Agent for unix-process:217797:270429 (system bus name :1.2906 [pkttyagent --process 217797 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 09:45:23 compute-0 polkitd[43345]: Unregistered Authentication Agent for unix-process:217797:270429 (system bus name :1.2906, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 09:45:24 compute-0 ceph-mon[74207]: pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:45:24 compute-0 sudo[217793]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=cleanup t=2025-11-25T09:45:25.182271537Z level=info msg="Completed cleanup jobs" duration=4.152559ms
Nov 25 09:45:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:25.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:25 compute-0 python3.9[217959]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=grafana.update.checker t=2025-11-25T09:45:25.271596665Z level=info msg="Update check succeeded" duration=43.732636ms
Nov 25 09:45:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=plugins.update.checker t=2025-11-25T09:45:25.27202864Z level=info msg="Update check succeeded" duration=32.988308ms
Nov 25 09:45:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:45:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:25.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:25 compute-0 sudo[218110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmrqeixmlwtahknxhqbewsxjudbzxpps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063925.4548862-3452-221481813818723/AnsiballZ_command.py'
Nov 25 09:45:25 compute-0 sudo[218110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:25 compute-0 sudo[218110]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:26 compute-0 sudo[218264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzlpdmamgrqsmbmoaozdjvrgnhoxntic ; FSID=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 KEY=AQBHdyVpAAAAABAACuXVpdObkUXtdSdlcr1vHw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063926.1503916-3476-214432994598602/AnsiballZ_command.py'
Nov 25 09:45:26 compute-0 sudo[218264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:26 compute-0 polkitd[43345]: Registered Authentication Agent for unix-process:218267:270711 (system bus name :1.2909 [pkttyagent --process 218267 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 09:45:26 compute-0 polkitd[43345]: Unregistered Authentication Agent for unix-process:218267:270711 (system bus name :1.2909, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 09:45:26 compute-0 sudo[218264]: pam_unix(sudo:session): session closed for user root
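The sequence from 09:45:22 to 09:45:26 rotates the libvirt secret for the Ceph fsid: a templated /tmp/secret.xml is copied in, virsh secret-undefine and secret-define are run against it, and a final command is invoked with FSID and KEY in its environment. Neither the template content nor that last command appears verbatim in the log; a sketch assuming the usual ceph-usage secret layout and that the final step is virsh secret-set-value:

    <secret ephemeral='no' private='no'>
      <uuid>af1c9ae3-08d7-5547-a53d-2cccf7c6ef90</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>      <!-- name is an assumption -->
      </usage>
    </secret>

    virsh secret-undefine "$FSID"                              # drop any stale definition
    virsh secret-define --file /tmp/secret.xml                 # register the secret object
    virsh secret-set-value --secret "$FSID" --base64 "$KEY"    # attach the Ceph key (inferred step)
    rm -f /tmp/secret.xml                                      # cleanup, as logged at 09:45:25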
Nov 25 09:45:26 compute-0 ceph-mon[74207]: pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:45:26 compute-0 sudo[218422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpqnupqqmxitlacatulgyxnbquterhdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063926.7428133-3500-158395964952043/AnsiballZ_copy.py'
Nov 25 09:45:26 compute-0 sudo[218422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:26.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:26.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:27 compute-0 python3.9[218424]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:27 compute-0 sudo[218422]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:27.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:45:27 compute-0 sudo[218574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrprqzhmqvodkftwvfbrluqtlaxhgxng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063927.2942007-3524-233214376982650/AnsiballZ_stat.py'
Nov 25 09:45:27 compute-0 sudo[218574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:27 compute-0 python3.9[218576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:27 compute-0 sudo[218574]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:45:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:27.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:45:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:27 compute-0 sudo[218699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojbdasnxvbhqgaioxgqjafyiewzmtwxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063927.2942007-3524-233214376982650/AnsiballZ_copy.py'
Nov 25 09:45:27 compute-0 sudo[218699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:28 compute-0 python3.9[218701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063927.2942007-3524-233214376982650/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:28 compute-0 sudo[218699]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:28 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 09:45:28 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 09:45:28 compute-0 sudo[218851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdihpmpxfscmyxtjyfuseshhqsnnfjhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063928.3608685-3572-280231390907215/AnsiballZ_file.py'
Nov 25 09:45:28 compute-0 sudo[218851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:28 compute-0 python3.9[218853]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:28 compute-0 sudo[218851]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:28 compute-0 ceph-mon[74207]: pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:45:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:45:29 compute-0 sudo[219014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uypdqokuqolvcxxhfltohhpqebplekrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063928.9782689-3596-32610759987109/AnsiballZ_stat.py'
Nov 25 09:45:29 compute-0 sudo[219014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:29.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:45:29 compute-0 python3.9[219016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:29 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:29 compute-0 sudo[219014]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:29 compute-0 sudo[219097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxjqlljzimnregoboesmujleptgacozn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063928.9782689-3596-32610759987109/AnsiballZ_file.py'
Nov 25 09:45:29 compute-0 sudo[219097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:29 compute-0 python3.9[219099]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:29 compute-0 sudo[219097]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:29.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:45:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
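The mgr periodically asks the mon for the OSD blocklist, and the mon audits each dispatch, which is what the handle_command/dispatch pair above records. The same query can be reproduced by hand from this node; this is a minimal sketch assuming the admin keyring is reachable through a cephadm shell and a recent Ceph release where the command is named "blocklist":

    # same query the mgr dispatches above, run manually
    cephadm shell -- ceph osd blocklist ls --format json
    # broader cluster state, matching the pgmap lines in this log
    cephadm shell -- ceph -s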
Nov 25 09:45:29 compute-0 sudo[219251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfgxqgnseqajtndqfdmehkmorwptszon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063929.8077834-3632-246440915042383/AnsiballZ_stat.py'
Nov 25 09:45:29 compute-0 sudo[219251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:30 compute-0 python3.9[219253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:30 compute-0 sudo[219251]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:30] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:45:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:30] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:45:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:30 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100013b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:30 compute-0 sudo[219329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nanmqlmnmcrvyebqpxgwugljbapixhkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063929.8077834-3632-246440915042383/AnsiballZ_file.py'
Nov 25 09:45:30 compute-0 sudo[219329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:30 compute-0 python3.9[219331]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.vb55knc6 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:30 compute-0 sudo[219329]: pam_unix(sudo:session): session closed for user root
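Each file deployed in this run follows the same Ansible pattern visible above: a stat task computes a sha1 checksum of the target (checksum_algorithm=sha1), and a copy/file task rewrites it only on mismatch. The staged inputs can be checked by hand with the same algorithm the tasks use, using the paths taken from the tasks above:

    sha1sum /var/lib/edpm-config/firewall/edpm-nftables-base.yaml \
            /var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml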
Nov 25 09:45:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:30 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100020b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:30 compute-0 ceph-mon[74207]: pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:45:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:30 compute-0 sudo[219481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxuefzuejfwyoqzquzccfjhabrxlyqch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063930.6488216-3668-252296760018038/AnsiballZ_stat.py'
Nov 25 09:45:30 compute-0 sudo[219481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:30 compute-0 python3.9[219483]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:31 compute-0 sudo[219481]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:31 compute-0 sudo[219559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxjgiivdkugsecwwncbezanvmgcewike ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063930.6488216-3668-252296760018038/AnsiballZ_file.py'
Nov 25 09:45:31 compute-0 sudo[219559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:31.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:45:31 compute-0 python3.9[219561]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094531 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:45:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:31 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
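The recurring ganesha.nfsd svc_vc_recv EVENT lines plausibly line up with the haproxy Layer4 check reported just above: a Layer4 check is a bare TCP connect-and-close, so no PROXY protocol header ever arrives on the accepted socket, which would explain the "proxy header rest len failed" text (the missing value after "rlen =" is a quirk of ganesha's own log formatting, reproduced here verbatim). A bare-TCP probe of the kind haproxy performs can be mimicked from bash; the host and port below are placeholders for illustration, not values taken from this log:

    # open and immediately close a TCP connection, as a Layer4 check does
    # (127.0.0.1 and 2049 are placeholder host/port, not from this log)
    exec 3<>/dev/tcp/127.0.0.1/2049 && exec 3>&-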
Nov 25 09:45:31 compute-0 sudo[219559]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:31.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:31 compute-0 sudo[219712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rimvbugcsaxrddzgqtokbqrfpeduzvat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063931.560871-3707-11324670605912/AnsiballZ_command.py'
Nov 25 09:45:31 compute-0 sudo[219712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:31 compute-0 python3.9[219714]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:31 compute-0 sudo[219712]: pam_unix(sudo:session): session closed for user root
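Before rendering anything, the firewall role snapshots the live ruleset in JSON with the command logged above. The same data can be inspected interactively; jq is an assumption here, any JSON pretty-printer works:

    nft -j list ruleset | jq '.nftables | length'   # count objects in the ruleset
    nft list ruleset                                # human-readable form of the same state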
Nov 25 09:45:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:32 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:32 compute-0 sudo[219866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nubtmypxxrqyksxszibcnqvdnbuwopeo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764063932.223142-3731-268858351085081/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 09:45:32 compute-0 sudo[219866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:32 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:32 compute-0 python3[219868]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 09:45:32 compute-0 sudo[219866]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:32 compute-0 ceph-mon[74207]: pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:45:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:33 compute-0 sudo[220018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcvlkzgfrazioukhgcslaarzusdpqesc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063932.8310258-3755-11523019411945/AnsiballZ_stat.py'
Nov 25 09:45:33 compute-0 sudo[220018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:33 compute-0 python3.9[220020]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:33.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:33 compute-0 sudo[220018]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:45:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:33 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100020b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:33 compute-0 sudo[220096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfudboobuejmoynwjcdlxsdesztjrxjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063932.8310258-3755-11523019411945/AnsiballZ_file.py'
Nov 25 09:45:33 compute-0 sudo[220096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:33 compute-0 python3.9[220098]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:33 compute-0 sudo[220096]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:33.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:33 compute-0 sudo[220250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygcyvetxzpmxqjxctmwtnhctvhzjbtwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063933.7035706-3791-169191031736654/AnsiballZ_stat.py'
Nov 25 09:45:33 compute-0 sudo[220250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:34 compute-0 python3.9[220252]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:34 compute-0 sudo[220250]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:34 compute-0 sudo[220328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqegtxmjxocwwohujejtvodskgrntlbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063933.7035706-3791-169191031736654/AnsiballZ_file.py'
Nov 25 09:45:34 compute-0 sudo[220328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:34 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:34 compute-0 python3.9[220330]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:34 compute-0 sudo[220328]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:34 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:34 compute-0 sudo[220480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yywngagnicpiqpzjelawvdvhircojkwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063934.5607886-3827-161217237394735/AnsiballZ_stat.py'
Nov 25 09:45:34 compute-0 sudo[220480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:34 compute-0 ceph-mon[74207]: pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:45:34 compute-0 python3.9[220482]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:34 compute-0 sudo[220480]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:35 compute-0 sudo[220558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esylmnimabzorppnxiqqvnowdbpgnnus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063934.5607886-3827-161217237394735/AnsiballZ_file.py'
Nov 25 09:45:35 compute-0 sudo[220558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:35.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:35 compute-0 python3.9[220560]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:35 compute-0 sudo[220558]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:45:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:35 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:35 compute-0 sudo[220710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akfnstjyxjaorwxueaovwogtfcgegqpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063935.4352107-3863-87828453596688/AnsiballZ_stat.py'
Nov 25 09:45:35 compute-0 sudo[220710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:35.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:35 compute-0 python3.9[220712]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:35 compute-0 sudo[220710]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:35 compute-0 sudo[220790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxnyrwophdcalcfnwkpjetppvtrvwwwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063935.4352107-3863-87828453596688/AnsiballZ_file.py'
Nov 25 09:45:35 compute-0 sudo[220790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:36 compute-0 python3.9[220792]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:36 compute-0 sudo[220790]: pam_unix(sudo:session): session closed for user root
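At this point the role has staged most of the nftables fragments under /etc/nftables, each owned by root with mode 0600 per the file tasks above; edpm-rules.nft follows a moment later in the log. A quick audit of the staged pieces:

    ls -l /etc/nftables/iptables.nft \
          /etc/nftables/edpm-chains.nft \
          /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-update-jumps.nft \
          /etc/nftables/edpm-jumps.nft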
Nov 25 09:45:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:36 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:36 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:36 compute-0 sudo[220955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irbjvxmxzkpfcvbxnshquwdpmbdijbat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063936.3518233-3899-251431561450758/AnsiballZ_stat.py'
Nov 25 09:45:36 compute-0 sudo[220955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:36 compute-0 podman[220916]: 2025-11-25 09:45:36.649621662 +0000 UTC m=+0.039620916 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
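The periodic container health_status events like the one above come from podman's healthcheck timer running the configured '/openstack/healthcheck' test inside the container. The same check can be driven manually:

    podman healthcheck run ovn_metadata_agent && echo healthy
    podman inspect --format '{{.State.Health.Status}}' ovn_metadata_agent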
Nov 25 09:45:36 compute-0 ceph-mon[74207]: pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:45:36 compute-0 python3.9[220961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:36 compute-0 sudo[220955]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:36.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:37.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:37.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:37.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
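All three alertmanager webhook targets fail the same way: the np000553469*.shiftstack names have no records on the resolver at 192.168.122.80, so delivery to the ceph-dashboard receiver can never succeed from this node and the dispatcher will keep retrying. The failure is straightforward to confirm; dig (from bind-utils) is an assumption, while getent answers through whatever NSS sources the host is configured with:

    getent hosts np0005534694.shiftstack || echo 'no NSS answer'
    dig @192.168.122.80 np0005534694.shiftstack +short   # empty/NXDOMAIN expected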
Nov 25 09:45:37 compute-0 sudo[221084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiqsqhzskeafwtvsaghdaqtjifkyenpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063936.3518233-3899-251431561450758/AnsiballZ_copy.py'
Nov 25 09:45:37 compute-0 sudo[221084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:37.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:37 compute-0 python3.9[221086]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063936.3518233-3899-251431561450758/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:37 compute-0 sudo[221084]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:45:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:37 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:37 compute-0 sudo[221236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqptehkisadynwlvkdeijwounpzpfwfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063937.4110239-3944-142050720063409/AnsiballZ_file.py'
Nov 25 09:45:37 compute-0 sudo[221236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:37.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:37 compute-0 python3.9[221238]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:37 compute-0 sudo[221236]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:38 compute-0 sudo[221390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdttovrngiefyypelvmlppbedxaqoepi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063937.964885-3968-145921262444631/AnsiballZ_command.py'
Nov 25 09:45:38 compute-0 sudo[221390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:38 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08001e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:38 compute-0 python3.9[221392]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:38 compute-0 sudo[221390]: pam_unix(sudo:session): session closed for user root
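Before anything is applied, the role dry-runs the concatenation of all five fragments in dependency order (chains, flushes, rules, update-jumps, jumps) through nft's check mode; -c parses and validates the input without committing it. Reproducing the logged check by hand:

    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -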
Nov 25 09:45:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:38 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:38 compute-0 ceph-mon[74207]: pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:45:38 compute-0 sudo[221545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sonnpjvwivognrbjjjtfuxcdsshpxidb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063938.5916922-3992-189355672358110/AnsiballZ_blockinfile.py'
Nov 25 09:45:38 compute-0 sudo[221545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:39 compute-0 python3.9[221547]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:39 compute-0 sudo[221545]: pam_unix(sudo:session): session closed for user root
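The blockinfile task above pins the boot-time configuration. Given its marker settings (# {mark} ANSIBLE MANAGED BLOCK with BEGIN/END markers) and the block content in the log, /etc/sysconfig/nftables.conf should end up carrying the fragment below, and the task validates the whole file with 'nft -c -f %s' before moving it into place:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK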
Nov 25 09:45:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:39.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:45:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:39 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:39 compute-0 sudo[221697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hretkupztdaamsynkyfrbhpovuwrvrie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063939.3546026-4019-39180859396826/AnsiballZ_command.py'
Nov 25 09:45:39 compute-0 sudo[221697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:39 compute-0 python3.9[221699]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:39 compute-0 sudo[221697]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:39.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:39 compute-0 ceph-mon[74207]: pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:45:40 compute-0 sudo[221852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoellujkhoeajmuogolgibutcixzbevc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063939.9058683-4043-184776643078502/AnsiballZ_stat.py'
Nov 25 09:45:40 compute-0 sudo[221852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:40] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:45:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:40] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:45:40 compute-0 python3.9[221854]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:45:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:40 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:40 compute-0 sudo[221852]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:40 compute-0 sudo[221881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:45:40 compute-0 sudo[221881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:45:40 compute-0 sudo[221881]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:40 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:40 compute-0 sudo[222031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecnmhhiefiinixngnyljmhrtcxgnnpyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063940.4494722-4067-221191077783720/AnsiballZ_command.py'
Nov 25 09:45:40 compute-0 sudo[222031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:40 compute-0 python3.9[222033]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:45:40 compute-0 sudo[222031]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:41 compute-0 sudo[222186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oineiedzmpnwstnlpqgvliodekrajapi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063941.0292659-4091-222315328295204/AnsiballZ_file.py'
Nov 25 09:45:41 compute-0 sudo[222186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:41.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:45:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:41 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:41 compute-0 python3.9[222188]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:41 compute-0 sudo[222186]: pam_unix(sudo:session): session closed for user root
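The .changed sentinel ties the apply step to actual rule edits: the role touches edpm-rules.nft.changed when it writes new rules, loads the chains file, stats the sentinel, applies flushes+rules+update-jumps in a single nft invocation (one transaction) only while the sentinel exists, and finally deletes it. The shell equivalent of the guarded apply, reconstructed from the commands logged above; the conditional is a reading of the sequence, the role's actual guards live in its Ansible tasks:

    nft -f /etc/nftables/edpm-chains.nft            # ensure chains exist first
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed  # mark the change as applied
    fi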
Nov 25 09:45:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:41.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:41 compute-0 sudo[222340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjcdjzyooyuhvefxqjcoegjnfntofyds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063941.7691743-4115-70993743628178/AnsiballZ_stat.py'
Nov 25 09:45:41 compute-0 sudo[222340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:42 compute-0 python3.9[222342]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:42 compute-0 sudo[222340]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:42 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:42 compute-0 ceph-mon[74207]: pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:45:42 compute-0 sudo[222463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbaqquysmciloiegqbbujrcjoswueqsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063941.7691743-4115-70993743628178/AnsiballZ_copy.py'
Nov 25 09:45:42 compute-0 sudo[222463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:42 compute-0 python3.9[222465]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063941.7691743-4115-70993743628178/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:42 compute-0 sudo[222463]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:42 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:42 compute-0 sudo[222615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwydxjlvglhhdsssiqogphejfuuquyfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063942.6980023-4160-184270464561337/AnsiballZ_stat.py'
Nov 25 09:45:42 compute-0 sudo[222615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:43 compute-0 python3.9[222617]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:43 compute-0 sudo[222615]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:43.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:43 compute-0 sudo[222738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfivmhuxcvstikaissrvqffyqdevuzzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063942.6980023-4160-184270464561337/AnsiballZ_copy.py'
Nov 25 09:45:43 compute-0 sudo[222738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:43 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:43 compute-0 python3.9[222740]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063942.6980023-4160-184270464561337/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:43 compute-0 sudo[222738]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:45:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:43.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:45:43 compute-0 sudo[222891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbyeeywrljnokfpnpfzpuuudvrkqnoad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063943.6701427-4205-276700449353799/AnsiballZ_stat.py'
Nov 25 09:45:43 compute-0 sudo[222891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:44 compute-0 python3.9[222893]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:45:44 compute-0 sudo[222891]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:44 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100061a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:44 compute-0 sudo[223027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ermuhfggnyuwqfqxjpgmkpedxvihdcen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063943.6701427-4205-276700449353799/AnsiballZ_copy.py'
Nov 25 09:45:44 compute-0 sudo[223027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:44 compute-0 podman[222989]: 2025-11-25 09:45:44.298429302 +0000 UTC m=+0.061623860 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:45:44 compute-0 ceph-mon[74207]: pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:44 compute-0 python3.9[223034]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063943.6701427-4205-276700449353799/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:45:44 compute-0 sudo[223027]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:44 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc140094b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:45:44
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', '.nfs', 'cephfs.cephfs.data', 'backups', 'volumes']
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:45:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:45:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:45:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:45:45 compute-0 sudo[223190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvfozmnrildifzngvlgwkyddldfqzgre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063944.6281073-4250-21822061209089/AnsiballZ_systemd.py'
Nov 25 09:45:45 compute-0 sudo[223190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:45:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:45.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:45 compute-0 python3.9[223192]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:45:45 compute-0 systemd[1]: Reloading.
Nov 25 09:45:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:45 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100061a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:45 compute-0 systemd-sysv-generator[223220]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:45 compute-0 systemd-rc-local-generator[223215]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:45:45 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 09:45:45 compute-0 sudo[223190]: pam_unix(sudo:session): session closed for user root
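The systemd task above rolls daemon_reload, enable, and restart into one step, and the "Reached target" line confirms the new unit came up; the repeated sysv-generator and rc-local notices are ordinary side effects of each Reloading pass, not errors. The equivalent manual sequence:

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target
    systemctl --no-pager status edpm_libvirt.target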
Nov 25 09:45:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:45.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:45 compute-0 sudo[223383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdtxiuojbbeabanvuieukdwempkokimq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063945.7679615-4274-259925032818972/AnsiballZ_systemd.py'
Nov 25 09:45:45 compute-0 sudo[223383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:46 compute-0 python3.9[223385]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 09:45:46 compute-0 systemd[1]: Reloading.
Nov 25 09:45:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:46 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08004af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:46 compute-0 systemd-rc-local-generator[223406]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:46 compute-0 systemd-sysv-generator[223415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:46 compute-0 ceph-mon[74207]: pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:46 compute-0 systemd[1]: Reloading.
Nov 25 09:45:46 compute-0 systemd-sysv-generator[223445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:45:46 compute-0 systemd-rc-local-generator[223442]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:45:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:46 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100061a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:46 compute-0 sudo[223383]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:46.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:47 compute-0 sshd-session[165006]: Connection closed by 192.168.122.30 port 38916
Nov 25 09:45:47 compute-0 sshd-session[165003]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:45:47 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 25 09:45:47 compute-0 systemd[1]: session-53.scope: Consumed 2min 23.586s CPU time.
Nov 25 09:45:47 compute-0 systemd-logind[744]: Session 53 logged out. Waiting for processes to exit.
Nov 25 09:45:47 compute-0 systemd-logind[744]: Removed session 53.
Nov 25 09:45:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:47.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:45:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:47 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:47.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:48 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10006340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:48 compute-0 ceph-mon[74207]: pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:45:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:48 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08004af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:49.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:49 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10006340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:49.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:50] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:45:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:45:50] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:45:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:50 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:50 compute-0 ceph-mon[74207]: pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:50 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10006340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:51.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:45:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:51 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08004af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:51.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:52 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:52 compute-0 ceph-mon[74207]: pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:45:52 compute-0 sshd-session[223487]: Accepted publickey for zuul from 192.168.122.30 port 55894 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:45:52 compute-0 systemd-logind[744]: New session 54 of user zuul.
Nov 25 09:45:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:52 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:52 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 25 09:45:52 compute-0 sshd-session[223487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:45:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:53.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:53 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:53 compute-0 python3.9[223640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:45:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:53.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:54 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08004af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:54 compute-0 ceph-mon[74207]: pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:54 compute-0 python3.9[223796]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:45:54 compute-0 network[223813]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:45:54 compute-0 network[223814]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:45:54 compute-0 network[223815]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:45:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:54 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:55.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:45:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:55 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:45:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:45:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:55.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:56 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:56 compute-0 ceph-mon[74207]: pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:56 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08004af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:56.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:56.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:56.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:45:56.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:45:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:57.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:45:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:57 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:45:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:57.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:45:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:45:58 compute-0 sudo[224089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcmjnwufpyfcgplcocjzhkcuyriquvyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063957.8895926-101-148482191800438/AnsiballZ_setup.py'
Nov 25 09:45:58 compute-0 sudo[224089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:58 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:58 compute-0 python3.9[224091]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 09:45:58 compute-0 ceph-mon[74207]: pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:45:58 compute-0 sudo[224089]: pam_unix(sudo:session): session closed for user root
Nov 25 09:45:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:58 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:58 compute-0 sudo[224173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qizyevsdalywyakzzvvopnopsxxojisu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063957.8895926-101-148482191800438/AnsiballZ_dnf.py'
Nov 25 09:45:58 compute-0 sudo[224173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:45:59 compute-0 python3.9[224175]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:45:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:45:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:45:59 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08004af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:45:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:45:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:45:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:59.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:45:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:45:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:00] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:46:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:00] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:46:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:00 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:00 compute-0 ceph-mon[74207]: pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:00 compute-0 sudo[224179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:00 compute-0 sudo[224179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:00 compute-0 sudo[224179]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:00 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:01.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:01 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:01.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:02 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:02 compute-0 ceph-mon[74207]: pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:02 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:03 compute-0 sudo[224173]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:03.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:03 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:03 compute-0 sudo[224358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynhmyyemxxlsuwwfbjrlpytqmsdrinqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063963.3372357-137-133612750298024/AnsiballZ_stat.py'
Nov 25 09:46:03 compute-0 sudo[224358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:03 compute-0 python3.9[224360]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:03 compute-0 sudo[224358]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:04 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc280039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:04 compute-0 sudo[224512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptfvxshdequvlixbwesugikeenzcnhlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063964.0358882-167-105074650567467/AnsiballZ_command.py'
Nov 25 09:46:04 compute-0 sudo[224512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:04 compute-0 ceph-mon[74207]: pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:04 compute-0 python3.9[224514]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:46:04 compute-0 sudo[224512]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:04 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:04 compute-0 sudo[224665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmtoivesfgkypdijsbtzyluihaiqbtny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063964.7571318-197-179403076308186/AnsiballZ_stat.py'
Nov 25 09:46:04 compute-0 sudo[224665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:05 compute-0 python3.9[224667]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:05 compute-0 sudo[224665]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:05.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:05 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:46:05.374 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:46:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:46:05.374 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:46:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:46:05.374 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:46:05 compute-0 sudo[224817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-finrardjltksdtxbojzwtucyvzxqfmwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063965.228762-221-176516171470583/AnsiballZ_command.py'
Nov 25 09:46:05 compute-0 sudo[224817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:05 compute-0 python3.9[224819]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:46:05 compute-0 sudo[224817]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:05.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:05 compute-0 sudo[224972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrkhlrltikkyhomyeszztzvcnyehrmsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063965.727496-245-202837542733670/AnsiballZ_stat.py'
Nov 25 09:46:05 compute-0 sudo[224972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:06 compute-0 python3.9[224974]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:06 compute-0 sudo[224972]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:06 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:06 compute-0 sudo[225095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhmwirzijdmunucqkukdrjivhqmwvotm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063965.727496-245-202837542733670/AnsiballZ_copy.py'
Nov 25 09:46:06 compute-0 sudo[225095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:06 compute-0 ceph-mon[74207]: pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:06 compute-0 python3.9[225097]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063965.727496-245-202837542733670/.source.iscsi _original_basename=.hn8r_jmv follow=False checksum=e16517866bfc16b409f5057714190a6d34e3e59d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:06 compute-0 sudo[225095]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:06 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc280044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:06 compute-0 podman[225197]: 2025-11-25 09:46:06.973259006 +0000 UTC m=+0.038709876 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 09:46:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:06.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:06.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:06.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:06.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:07 compute-0 sudo[225263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weuckmvxsaopvlzrtyrpdfqfhskromeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063966.694126-290-57949727782889/AnsiballZ_file.py'
Nov 25 09:46:07 compute-0 sudo[225263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:07 compute-0 python3.9[225265]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:07 compute-0 sudo[225263]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:07.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:07 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:07 compute-0 sudo[225415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cregzarjpeakmdcbmeymuemoegaekdsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063967.3313763-314-157549489313795/AnsiballZ_lineinfile.py'
Nov 25 09:46:07 compute-0 sudo[225415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:07.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:07 compute-0 python3.9[225417]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:07 compute-0 sudo[225415]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:08 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:08 compute-0 ceph-mon[74207]: pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:08 compute-0 sudo[225569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uegoisofwhnncpmbobtbusxogxvrnkma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063968.045326-341-76004612109507/AnsiballZ_systemd_service.py'
Nov 25 09:46:08 compute-0 sudo[225569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:08 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:08 compute-0 python3.9[225571]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:08 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 09:46:08 compute-0 sudo[225569]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:09 compute-0 sudo[225725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjhwhgfxwznpxgtncgmgioytizjbjpeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063969.0679522-365-195881757471312/AnsiballZ_systemd_service.py'
Nov 25 09:46:09 compute-0 sudo[225725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:09.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:09 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc280044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:09 compute-0 python3.9[225727]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:09 compute-0 systemd[1]: Reloading.
Nov 25 09:46:09 compute-0 systemd-rc-local-generator[225750]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:09 compute-0 systemd-sysv-generator[225754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:09.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:09 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 09:46:09 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 09:46:09 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 09:46:09 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 09:46:09 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 25 09:46:09 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 25 09:46:09 compute-0 sudo[225725]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:10] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:46:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:10] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:46:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:10 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:10 compute-0 ceph-mon[74207]: pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:10 compute-0 sudo[225927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzeutcjqsyejdpccfkmnjxlayabzgqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063970.3179398-398-263820782757349/AnsiballZ_service_facts.py'
Nov 25 09:46:10 compute-0 sudo[225927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:10 compute-0 python3.9[225929]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:46:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:10 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:10 compute-0 network[225946]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:46:10 compute-0 network[225947]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:46:10 compute-0 network[225948]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:46:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:11.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:11 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:11.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:12 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc280044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:12 compute-0 ceph-mon[74207]: pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:12 compute-0 sudo[225927]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:12 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc280044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:13.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:13 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:13.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:14 compute-0 sudo[226222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzthefnincqyvsjkgsbzigvbljltvupa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063974.0480144-428-26370671076820/AnsiballZ_file.py'
Nov 25 09:46:14 compute-0 sudo[226222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:14 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:14 compute-0 python3.9[226224]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
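The file task above only asserts state=directory with mode 0755 (plus an SELinux type of etc_t, which the Python stdlib cannot set). The non-SELinux part reduces to:

    import os

    path = "/etc/modules-load.d"
    os.makedirs(path, mode=0o755, exist_ok=True)
    os.chmod(path, 0o755)  # makedirs' mode is filtered by the umask, so re-assert it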
Nov 25 09:46:14 compute-0 sudo[226222]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:14 compute-0 ceph-mon[74207]: pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:14 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc280044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:14 compute-0 sudo[226385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sannrzoejuhalakhymufsqltparpllwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063974.5727851-452-19528397613889/AnsiballZ_modprobe.py'
Nov 25 09:46:14 compute-0 sudo[226385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:14 compute-0 podman[226348]: 2025-11-25 09:46:14.915135063 +0000 UTC m=+0.064185910 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:46:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:46:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:46:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:46:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:46:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:46:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:46:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:46:15 compute-0 python3.9[226393]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
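modprobe with state=present is idempotent: the module is loaded only when it is not already resident. A minimal sketch of that check, assuming root privileges; note that /proc/modules reports dm-multipath as dm_multipath:

    import subprocess

    def ensure_module(name: str) -> None:
        # Kernel module names use underscores in /proc/modules.
        wanted = name.replace("-", "_")
        with open("/proc/modules") as f:
            loaded = {line.split()[0] for line in f}
        if wanted not in loaded:
            subprocess.run(["modprobe", name], check=True)

    ensure_module("dm-multipath")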
Nov 25 09:46:15 compute-0 sudo[226385]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:15.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:15 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc280044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:15 compute-0 sudo[226554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxoprcwgdmkmvlimnnginfuzggillrun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063975.1999543-476-275740103114675/AnsiballZ_stat.py'
Nov 25 09:46:15 compute-0 sudo[226554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:15 compute-0 python3.9[226556]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:15 compute-0 sudo[226554]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:15.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:15 compute-0 sudo[226678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbupqkrbojevydxpwryldggewnruaxff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063975.1999543-476-275740103114675/AnsiballZ_copy.py'
Nov 25 09:46:15 compute-0 sudo[226678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:15 compute-0 python3.9[226680]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063975.1999543-476-275740103114675/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
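The copy task above is idempotent: ansible compares the destination's sha1 (065061c6... in the invocation) against the rendered source before rewriting. A sketch of the same pattern; the one-line payload is an assumption, since the rendered module-load.conf.j2 is not in the log:

    import hashlib, os

    path = "/etc/modules-load.d/dm-multipath.conf"
    content = b"dm-multipath\n"  # assumed rendering of module-load.conf.j2

    def sha1(data: bytes) -> str:
        return hashlib.sha1(data).hexdigest()

    current = open(path, "rb").read() if os.path.exists(path) else None
    if current is None or sha1(current) != sha1(content):
        with open(path, "wb") as f:
            f.write(content)
        os.chmod(path, 0o644)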
Nov 25 09:46:15 compute-0 sudo[226682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:46:15 compute-0 sudo[226682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:15 compute-0 sudo[226682]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:15 compute-0 sudo[226678]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:15 compute-0 sudo[226707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:46:15 compute-0 sudo[226707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:16 compute-0 sudo[226707]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:16 compute-0 sudo[226910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dilkzplnsbkyvglnrcmkqdfnpqqchhmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063976.196658-524-207781292800278/AnsiballZ_lineinfile.py'
Nov 25 09:46:16 compute-0 sudo[226910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:16 compute-0 ceph-mon[74207]: pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:16 compute-0 python3.9[226912]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
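lineinfile with create=True and state=present appends the line only when no existing line matches it exactly; roughly:

    def line_in_file(path: str, line: str) -> bool:
        # Append `line` unless already present; True means the file changed.
        try:
            lines = open(path).read().splitlines()
        except FileNotFoundError:
            lines = []
        if line in lines:
            return False
        with open(path, "a") as f:
            f.write(line + "\n")
        return True

    line_in_file("/etc/modules", "dm-multipath")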
Nov 25 09:46:16 compute-0 sudo[226910]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:16 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:16.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:17.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:17.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:17.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
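All three webhook targets fail the same way: the resolver at 192.168.122.80 has no records for the *.shiftstack names. The failure is reproducible outside alertmanager with a plain lookup (getaddrinfo uses whatever resolver /etc/resolv.conf points at, so run it where alertmanager runs):

    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, "resolves")
        except socket.gaierror as exc:
            print(host, "->", exc)  # expected here: name resolution failure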
Nov 25 09:46:17 compute-0 sudo[227062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xybfileyzkydylvehusixvguojvycwcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063976.7884905-548-20117757639598/AnsiballZ_systemd.py'
Nov 25 09:46:17 compute-0 sudo[227062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:17.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:17 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc14009dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:17 compute-0 python3.9[227064]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:46:17 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 09:46:17 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 09:46:17 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 09:46:17 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 09:46:17 compute-0 systemd[1]: Finished Load Kernel Modules.
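The systemd task above maps onto the stop/start cycle logged by systemd[1]; the same restart-and-verify step from Python, assuming systemctl access as root:

    import subprocess

    unit = "systemd-modules-load.service"
    subprocess.run(["systemctl", "restart", unit], check=True)
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True).stdout.strip()
    print(state)  # "active" once "Finished Load Kernel Modules" appears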
Nov 25 09:46:17 compute-0 sudo[227062]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:46:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:46:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:17.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:17 compute-0 sudo[227220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qptigxiazjvdnmswbgxrzvojfbxpfovr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063977.7250676-572-178236417581078/AnsiballZ_file.py'
Nov 25 09:46:17 compute-0 sudo[227220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:18 compute-0 python3.9[227222]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:18 compute-0 sudo[227220]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:46:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:46:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:46:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:46:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:46:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:46:18 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:46:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:18 compute-0 sudo[227246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:46:18 compute-0 sudo[227246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:18 compute-0 sudo[227246]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:18 compute-0 sudo[227272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:46:18 compute-0 sudo[227272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:18 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c00f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:18 compute-0 podman[227400]: 2025-11-25 09:46:18.485877442 +0000 UTC m=+0.027226455 container create e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_galileo, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:46:18 compute-0 systemd[1]: Started libpod-conmon-e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378.scope.
Nov 25 09:46:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:18 compute-0 podman[227400]: 2025-11-25 09:46:18.531461328 +0000 UTC m=+0.072810351 container init e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_galileo, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:46:18 compute-0 podman[227400]: 2025-11-25 09:46:18.536212296 +0000 UTC m=+0.077561308 container start e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_galileo, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:46:18 compute-0 podman[227400]: 2025-11-25 09:46:18.537228552 +0000 UTC m=+0.078577565 container attach e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_galileo, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:46:18 compute-0 upbeat_galileo[227437]: 167 167
Nov 25 09:46:18 compute-0 systemd[1]: libpod-e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378.scope: Deactivated successfully.
Nov 25 09:46:18 compute-0 conmon[227437]: conmon e5ec449d4eaddb7fb06f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378.scope/container/memory.events
Nov 25 09:46:18 compute-0 podman[227400]: 2025-11-25 09:46:18.540413838 +0000 UTC m=+0.081762851 container died e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 09:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-35768cd3daad44842262a081645678abc28ac9ef21e8f5a35b90a928239f4f22-merged.mount: Deactivated successfully.
Nov 25 09:46:18 compute-0 podman[227400]: 2025-11-25 09:46:18.569068966 +0000 UTC m=+0.110417978 container remove e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:46:18 compute-0 podman[227400]: 2025-11-25 09:46:18.474714295 +0000 UTC m=+0.016063318 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:46:18 compute-0 systemd[1]: libpod-conmon-e5ec449d4eaddb7fb06f8cc0d3725e276618299b07d5ca2d90dcbc0188098378.scope: Deactivated successfully.
Nov 25 09:46:18 compute-0 sudo[227481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmapqkmrixthyfqfpxuwofcjvcbuyvci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063978.3433597-599-162372212943999/AnsiballZ_stat.py'
Nov 25 09:46:18 compute-0 sudo[227481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:18 compute-0 ceph-mon[74207]: pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:46:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:18 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c00f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:18 compute-0 podman[227491]: 2025-11-25 09:46:18.691368194 +0000 UTC m=+0.029026689 container create 679cd4e899f092ba3b46dd8e1973558adf6577983e33025670f5fc5ab5fadf2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swanson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 09:46:18 compute-0 systemd[1]: Started libpod-conmon-679cd4e899f092ba3b46dd8e1973558adf6577983e33025670f5fc5ab5fadf2f.scope.
Nov 25 09:46:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34462d23dc493b2bd97ff8759b78b319bac2e0040cb3b69da109524e93f95106/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34462d23dc493b2bd97ff8759b78b319bac2e0040cb3b69da109524e93f95106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34462d23dc493b2bd97ff8759b78b319bac2e0040cb3b69da109524e93f95106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34462d23dc493b2bd97ff8759b78b319bac2e0040cb3b69da109524e93f95106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34462d23dc493b2bd97ff8759b78b319bac2e0040cb3b69da109524e93f95106/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:18 compute-0 podman[227491]: 2025-11-25 09:46:18.748069151 +0000 UTC m=+0.085727666 container init 679cd4e899f092ba3b46dd8e1973558adf6577983e33025670f5fc5ab5fadf2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:46:18 compute-0 podman[227491]: 2025-11-25 09:46:18.75455552 +0000 UTC m=+0.092214015 container start 679cd4e899f092ba3b46dd8e1973558adf6577983e33025670f5fc5ab5fadf2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swanson, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:46:18 compute-0 python3.9[227485]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:18 compute-0 podman[227491]: 2025-11-25 09:46:18.756063313 +0000 UTC m=+0.093721809 container attach 679cd4e899f092ba3b46dd8e1973558adf6577983e33025670f5fc5ab5fadf2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swanson, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:46:18 compute-0 sudo[227481]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:18 compute-0 podman[227491]: 2025-11-25 09:46:18.680007925 +0000 UTC m=+0.017666430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:46:19 compute-0 elegant_swanson[227504]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:46:19 compute-0 elegant_swanson[227504]: --> All data devices are unavailable
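ceph-volume counts an LV as unavailable when it already carries ceph.* lv_tags, which matches the lvm list output further below showing osd.1 on /dev/ceph_vg0/ceph_lv0, so the batch run above is a no-op rather than a failure. A hedged spot check with lvs:

    import subprocess

    # A non-empty ceph.osd_id tag means the LV is already a prepared OSD.
    tags = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_tags", "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("already prepared" if "ceph.osd_id=" in tags else "available")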
Nov 25 09:46:19 compute-0 systemd[1]: libpod-679cd4e899f092ba3b46dd8e1973558adf6577983e33025670f5fc5ab5fadf2f.scope: Deactivated successfully.
Nov 25 09:46:19 compute-0 podman[227491]: 2025-11-25 09:46:19.023841325 +0000 UTC m=+0.361499820 container died 679cd4e899f092ba3b46dd8e1973558adf6577983e33025670f5fc5ab5fadf2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swanson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-34462d23dc493b2bd97ff8759b78b319bac2e0040cb3b69da109524e93f95106-merged.mount: Deactivated successfully.
Nov 25 09:46:19 compute-0 podman[227491]: 2025-11-25 09:46:19.04753101 +0000 UTC m=+0.385189506 container remove 679cd4e899f092ba3b46dd8e1973558adf6577983e33025670f5fc5ab5fadf2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:46:19 compute-0 systemd[1]: libpod-conmon-679cd4e899f092ba3b46dd8e1973558adf6577983e33025670f5fc5ab5fadf2f.scope: Deactivated successfully.
Nov 25 09:46:19 compute-0 sudo[227272]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:19 compute-0 sudo[227648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:46:19 compute-0 sudo[227648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:19 compute-0 sudo[227648]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:19 compute-0 sudo[227712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmaakxiwcocurmwrrmdzzlapyfsicobf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063978.9591641-626-131550478180010/AnsiballZ_stat.py'
Nov 25 09:46:19 compute-0 sudo[227712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:19 compute-0 sudo[227693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:46:19 compute-0 sudo[227693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:19.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:19 compute-0 python3.9[227728]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:19 compute-0 sudo[227712]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:19 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:19 compute-0 podman[227786]: 2025-11-25 09:46:19.461865928 +0000 UTC m=+0.030478317 container create 17b6a740a44a854a3facfc85554192fa52c1f7e3e471bccd4a4a817a3e9e54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:46:19 compute-0 systemd[1]: Started libpod-conmon-17b6a740a44a854a3facfc85554192fa52c1f7e3e471bccd4a4a817a3e9e54cb.scope.
Nov 25 09:46:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:19 compute-0 podman[227786]: 2025-11-25 09:46:19.511834351 +0000 UTC m=+0.080446759 container init 17b6a740a44a854a3facfc85554192fa52c1f7e3e471bccd4a4a817a3e9e54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:46:19 compute-0 podman[227786]: 2025-11-25 09:46:19.516684115 +0000 UTC m=+0.085296494 container start 17b6a740a44a854a3facfc85554192fa52c1f7e3e471bccd4a4a817a3e9e54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 25 09:46:19 compute-0 podman[227786]: 2025-11-25 09:46:19.518549572 +0000 UTC m=+0.087161971 container attach 17b6a740a44a854a3facfc85554192fa52c1f7e3e471bccd4a4a817a3e9e54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:46:19 compute-0 affectionate_noyce[227822]: 167 167
Nov 25 09:46:19 compute-0 systemd[1]: libpod-17b6a740a44a854a3facfc85554192fa52c1f7e3e471bccd4a4a817a3e9e54cb.scope: Deactivated successfully.
Nov 25 09:46:19 compute-0 podman[227786]: 2025-11-25 09:46:19.520535706 +0000 UTC m=+0.089148085 container died 17b6a740a44a854a3facfc85554192fa52c1f7e3e471bccd4a4a817a3e9e54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Nov 25 09:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc99c5f684255cb0a87bd7d8380cbc2d772e45c64089f972001a9cb0d3a83cdc-merged.mount: Deactivated successfully.
Nov 25 09:46:19 compute-0 podman[227786]: 2025-11-25 09:46:19.541822271 +0000 UTC m=+0.110434650 container remove 17b6a740a44a854a3facfc85554192fa52c1f7e3e471bccd4a4a817a3e9e54cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:46:19 compute-0 podman[227786]: 2025-11-25 09:46:19.450138606 +0000 UTC m=+0.018751006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:46:19 compute-0 systemd[1]: libpod-conmon-17b6a740a44a854a3facfc85554192fa52c1f7e3e471bccd4a4a817a3e9e54cb.scope: Deactivated successfully.
Nov 25 09:46:19 compute-0 podman[227904]: 2025-11-25 09:46:19.664473984 +0000 UTC m=+0.031295027 container create 0cbfccab72176036a4a28971c81dd6d1299bf7a435e2bf7cf017b3eae51c1e3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:46:19 compute-0 systemd[1]: Started libpod-conmon-0cbfccab72176036a4a28971c81dd6d1299bf7a435e2bf7cf017b3eae51c1e3c.scope.
Nov 25 09:46:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:19 compute-0 sudo[227962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zipvyjeqlrqgkgxkgkzkgestwmhnwbrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063979.4922097-650-54766039536723/AnsiballZ_stat.py'
Nov 25 09:46:19 compute-0 sudo[227962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610325ee489a93c02c7784d6f77af5603ce1baeb7aedec09afb3bf6277741d1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610325ee489a93c02c7784d6f77af5603ce1baeb7aedec09afb3bf6277741d1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610325ee489a93c02c7784d6f77af5603ce1baeb7aedec09afb3bf6277741d1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610325ee489a93c02c7784d6f77af5603ce1baeb7aedec09afb3bf6277741d1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:19 compute-0 podman[227904]: 2025-11-25 09:46:19.721279248 +0000 UTC m=+0.088100300 container init 0cbfccab72176036a4a28971c81dd6d1299bf7a435e2bf7cf017b3eae51c1e3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:46:19 compute-0 podman[227904]: 2025-11-25 09:46:19.726816008 +0000 UTC m=+0.093637050 container start 0cbfccab72176036a4a28971c81dd6d1299bf7a435e2bf7cf017b3eae51c1e3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:46:19 compute-0 podman[227904]: 2025-11-25 09:46:19.728052428 +0000 UTC m=+0.094873470 container attach 0cbfccab72176036a4a28971c81dd6d1299bf7a435e2bf7cf017b3eae51c1e3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:46:19 compute-0 podman[227904]: 2025-11-25 09:46:19.652868924 +0000 UTC m=+0.019689976 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:46:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:19.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:19 compute-0 python3.9[227967]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:19 compute-0 sudo[227962]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:19 compute-0 distracted_perlman[227963]: {
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:     "1": [
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:         {
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "devices": [
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "/dev/loop3"
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             ],
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "lv_name": "ceph_lv0",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "lv_size": "21470642176",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "name": "ceph_lv0",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "tags": {
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.cluster_name": "ceph",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.crush_device_class": "",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.encrypted": "0",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.osd_id": "1",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.type": "block",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.vdo": "0",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:                 "ceph.with_tpm": "0"
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             },
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "type": "block",
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:             "vg_name": "ceph_vg0"
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:         }
Nov 25 09:46:19 compute-0 distracted_perlman[227963]:     ]
Nov 25 09:46:19 compute-0 distracted_perlman[227963]: }
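The JSON block emitted by the distracted_perlman container is `ceph-volume lvm list --format json` output: a map from OSD id ("1") to the logical volumes backing it, with the ceph.* LV tags repeated in a "tags" object. A minimal sketch of consuming it, using the field names shown above and assuming ceph-volume is on PATH (for example inside a cephadm shell):

    import json
    import subprocess

    # Run ceph-volume the way the container above did and pull out the
    # fields an operator usually wants: OSD id, LV path, backing devices.
    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")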
Nov 25 09:46:19 compute-0 systemd[1]: libpod-0cbfccab72176036a4a28971c81dd6d1299bf7a435e2bf7cf017b3eae51c1e3c.scope: Deactivated successfully.
Nov 25 09:46:19 compute-0 podman[227904]: 2025-11-25 09:46:19.952201039 +0000 UTC m=+0.319022082 container died 0cbfccab72176036a4a28971c81dd6d1299bf7a435e2bf7cf017b3eae51c1e3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 09:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-610325ee489a93c02c7784d6f77af5603ce1baeb7aedec09afb3bf6277741d1e-merged.mount: Deactivated successfully.
Nov 25 09:46:19 compute-0 podman[227904]: 2025-11-25 09:46:19.974705661 +0000 UTC m=+0.341526702 container remove 0cbfccab72176036a4a28971c81dd6d1299bf7a435e2bf7cf017b3eae51c1e3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:46:19 compute-0 systemd[1]: libpod-conmon-0cbfccab72176036a4a28971c81dd6d1299bf7a435e2bf7cf017b3eae51c1e3c.scope: Deactivated successfully.
Nov 25 09:46:20 compute-0 sudo[227693]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:20 compute-0 sudo[228055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:46:20 compute-0 sudo[228055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:20 compute-0 sudo[228055]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:20 compute-0 sudo[228103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:46:20 compute-0 sudo[228103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
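The sudo line above shows how these probes are driven: the per-cluster copy of cephadm under /var/lib/ceph/<fsid>/ is invoked with --image and --timeout, and everything after "--" is handed to ceph-volume inside a one-shot container. A rough standalone equivalent via the cephadm CLI, with fsid and image digest copied from the log; the JSON pretty-printing is just for inspection:

    import json
    import subprocess

    # Editor's sketch: the same "raw list" probe through cephadm.
    fsid = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["cephadm", "--image", image, "ceph-volume", "--fsid", fsid,
         "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=4))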
Nov 25 09:46:20 compute-0 sudo[228155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scozugnshoqfgdaapvywceymqjwtupkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063979.4922097-650-54766039536723/AnsiballZ_copy.py'
Nov 25 09:46:20 compute-0 sudo[228155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:20] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:46:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:20] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:46:20 compute-0 python3.9[228157]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063979.4922097-650-54766039536723/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:20 compute-0 sudo[228155]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:20 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc28006540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:20 compute-0 podman[228213]: 2025-11-25 09:46:20.400612736 +0000 UTC m=+0.029946062 container create 5ebb866e953e4095f759946579596facb4a6fbbd4eaf5ccc87605dbe13a86982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_williams, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:46:20 compute-0 systemd[1]: Started libpod-conmon-5ebb866e953e4095f759946579596facb4a6fbbd4eaf5ccc87605dbe13a86982.scope.
Nov 25 09:46:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:20 compute-0 podman[228213]: 2025-11-25 09:46:20.458020085 +0000 UTC m=+0.087353431 container init 5ebb866e953e4095f759946579596facb4a6fbbd4eaf5ccc87605dbe13a86982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_williams, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:46:20 compute-0 podman[228213]: 2025-11-25 09:46:20.463711055 +0000 UTC m=+0.093044380 container start 5ebb866e953e4095f759946579596facb4a6fbbd4eaf5ccc87605dbe13a86982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:46:20 compute-0 podman[228213]: 2025-11-25 09:46:20.467572175 +0000 UTC m=+0.096905521 container attach 5ebb866e953e4095f759946579596facb4a6fbbd4eaf5ccc87605dbe13a86982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:46:20 compute-0 happy_williams[228250]: 167 167
Nov 25 09:46:20 compute-0 podman[228213]: 2025-11-25 09:46:20.468681456 +0000 UTC m=+0.098014783 container died 5ebb866e953e4095f759946579596facb4a6fbbd4eaf5ccc87605dbe13a86982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:46:20 compute-0 systemd[1]: libpod-5ebb866e953e4095f759946579596facb4a6fbbd4eaf5ccc87605dbe13a86982.scope: Deactivated successfully.
Nov 25 09:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d33a7602f203aee404bb78c01f1a4499f1144b63c5d3478d9f9f59ab3e94594f-merged.mount: Deactivated successfully.
Nov 25 09:46:20 compute-0 podman[228213]: 2025-11-25 09:46:20.389140376 +0000 UTC m=+0.018473701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:46:20 compute-0 sudo[228260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:20 compute-0 podman[228213]: 2025-11-25 09:46:20.491968342 +0000 UTC m=+0.121301669 container remove 5ebb866e953e4095f759946579596facb4a6fbbd4eaf5ccc87605dbe13a86982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_williams, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:46:20 compute-0 sudo[228260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:20 compute-0 sudo[228260]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:20 compute-0 systemd[1]: libpod-conmon-5ebb866e953e4095f759946579596facb4a6fbbd4eaf5ccc87605dbe13a86982.scope: Deactivated successfully.
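The happy_williams container lives for well under a second and prints only "167 167", which looks like cephadm's uid/gid probe: stat /var/lib/ceph inside the image to learn which uid/gid the ceph user maps to (167:167 in Red Hat based Ceph images) before creating host directories. A hedged reproduction; the stat entrypoint and path are an inference, only the image digest and the "167 167" output appear in the log:

    import subprocess

    # One-shot probe: print the uid/gid owning /var/lib/ceph in the image.
    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    uid, gid = (int(x) for x in out.split())
    print(uid, gid)  # 167 167, matching the container output above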
Nov 25 09:46:20 compute-0 ceph-mon[74207]: pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:20 compute-0 podman[228372]: 2025-11-25 09:46:20.614608414 +0000 UTC m=+0.030423553 container create 7574eadc0b6001b9e8a82ac48bd420226ec1f035c5882a42efbabec66099daa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:46:20 compute-0 systemd[1]: Started libpod-conmon-7574eadc0b6001b9e8a82ac48bd420226ec1f035c5882a42efbabec66099daa7.scope.
Nov 25 09:46:20 compute-0 sudo[228412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zihitdbyrdwrbfccekbskjxjumctivcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063980.426513-695-158807781020173/AnsiballZ_command.py'
Nov 25 09:46:20 compute-0 sudo[228412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:20 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8065776a47c14beb2b7a9fbe4ca7eb50e8917b67f942f7dfdc38ee92953d83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8065776a47c14beb2b7a9fbe4ca7eb50e8917b67f942f7dfdc38ee92953d83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8065776a47c14beb2b7a9fbe4ca7eb50e8917b67f942f7dfdc38ee92953d83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8065776a47c14beb2b7a9fbe4ca7eb50e8917b67f942f7dfdc38ee92953d83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:20 compute-0 podman[228372]: 2025-11-25 09:46:20.68224058 +0000 UTC m=+0.098055739 container init 7574eadc0b6001b9e8a82ac48bd420226ec1f035c5882a42efbabec66099daa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:46:20 compute-0 podman[228372]: 2025-11-25 09:46:20.687794853 +0000 UTC m=+0.103609992 container start 7574eadc0b6001b9e8a82ac48bd420226ec1f035c5882a42efbabec66099daa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wilson, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:46:20 compute-0 podman[228372]: 2025-11-25 09:46:20.689021536 +0000 UTC m=+0.104836675 container attach 7574eadc0b6001b9e8a82ac48bd420226ec1f035c5882a42efbabec66099daa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:46:20 compute-0 podman[228372]: 2025-11-25 09:46:20.603804564 +0000 UTC m=+0.019619723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:46:20 compute-0 python3.9[228416]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:46:20 compute-0 sudo[228412]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:21 compute-0 sudo[228639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yguvlzmfkfzmeabcshdcnfgubwjbfggk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063980.9708688-719-25695473105087/AnsiballZ_lineinfile.py'
Nov 25 09:46:21 compute-0 awesome_wilson[228414]: {}
Nov 25 09:46:21 compute-0 sudo[228639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:21 compute-0 systemd[1]: libpod-7574eadc0b6001b9e8a82ac48bd420226ec1f035c5882a42efbabec66099daa7.scope: Deactivated successfully.
Nov 25 09:46:21 compute-0 podman[228372]: 2025-11-25 09:46:21.189660224 +0000 UTC m=+0.605475363 container died 7574eadc0b6001b9e8a82ac48bd420226ec1f035c5882a42efbabec66099daa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wilson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:46:21 compute-0 lvm[228644]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:46:21 compute-0 lvm[228644]: VG ceph_vg0 finished
Nov 25 09:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8065776a47c14beb2b7a9fbe4ca7eb50e8917b67f942f7dfdc38ee92953d83-merged.mount: Deactivated successfully.
Nov 25 09:46:21 compute-0 podman[228372]: 2025-11-25 09:46:21.212597872 +0000 UTC m=+0.628413021 container remove 7574eadc0b6001b9e8a82ac48bd420226ec1f035c5882a42efbabec66099daa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wilson, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:46:21 compute-0 systemd[1]: libpod-conmon-7574eadc0b6001b9e8a82ac48bd420226ec1f035c5882a42efbabec66099daa7.scope: Deactivated successfully.
Nov 25 09:46:21 compute-0 sudo[228103]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:46:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:46:21 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:21.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:21 compute-0 sudo[228656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:46:21 compute-0 sudo[228656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:21 compute-0 sudo[228656]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:21 compute-0 python3.9[228645]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:21 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c0049a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:21 compute-0 sudo[228639]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:21.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:21 compute-0 sudo[228831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evgqzkjhlsrfiphtuzynzxurzxadlxqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063981.5099447-743-243788682028052/AnsiballZ_replace.py'
Nov 25 09:46:21 compute-0 sudo[228831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:21 compute-0 python3.9[228833]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:21 compute-0 sudo[228831]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:22 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:22 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:46:22 compute-0 ceph-mon[74207]: pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:22 compute-0 sudo[228984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axjjubrgwakpyjycreuuvatxeeahcbmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063982.1157615-767-185312201265721/AnsiballZ_replace.py'
Nov 25 09:46:22 compute-0 sudo[228984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:22 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c00f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:22 compute-0 python3.9[228986]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:22 compute-0 sudo[228984]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:22 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc28006540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:22 compute-0 sudo[229136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeajutaaugtxdjlwtmtoddgqfomxvocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063982.657487-794-104844297860824/AnsiballZ_lineinfile.py'
Nov 25 09:46:22 compute-0 sudo[229136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:22 compute-0 python3.9[229138]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:23 compute-0 sudo[229136]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:23.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:23 compute-0 sudo[229288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpokdfbkzntigqnduvxsxtvlhtxqlgjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063983.0973694-794-68486503819/AnsiballZ_lineinfile.py'
Nov 25 09:46:23 compute-0 sudo[229288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:23 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:23 compute-0 python3.9[229290]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:23 compute-0 sudo[229288]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:23 compute-0 sudo[229441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auelnukdqcqqpeldglvtlwgosbkxpokc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063983.5377948-794-35200468997019/AnsiballZ_lineinfile.py'
Nov 25 09:46:23 compute-0 sudo[229441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:23.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:23 compute-0 python3.9[229443]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:23 compute-0 sudo[229441]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:24 compute-0 sudo[229594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxrknwpkqmteuyrgakfryprgtthmfmrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063983.9733982-794-279954263161425/AnsiballZ_lineinfile.py'
Nov 25 09:46:24 compute-0 sudo[229594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:24 compute-0 python3.9[229596]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:24 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c0056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:24 compute-0 sudo[229594]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:24 compute-0 ceph-mon[74207]: pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:24 compute-0 sudo[229746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtxwqzlsyyaovwcmifgvcelgyfmuxizv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063984.4725814-881-12854628147675/AnsiballZ_stat.py'
Nov 25 09:46:24 compute-0 sudo[229746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:24 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c1610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:24 compute-0 python3.9[229748]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:24 compute-0 sudo[229746]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:25 compute-0 sudo[229900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iipntbkdtywkouseevzfoivyukanlgdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063984.9840174-905-163428453042393/AnsiballZ_file.py'
Nov 25 09:46:25 compute-0 sudo[229900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:25.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:25 compute-0 python3.9[229902]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:25 compute-0 sudo[229900]: pam_unix(sudo:session): session closed for user root
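The zuul/ansible tasks from the stat at 09:46:19 through the touch above form one edit sequence on /etc/multipath.conf: copy a baseline file, ensure a "blacklist {" section exists and is closed, strip any blanket devnode ".*" blacklist entry, pin four defaults (find_multipaths, recheck_wwid, skip_kpartx, user_friendly_names), then drop the .multipath_restart_required marker so a later handler can restart multipathd. An editor's reconstruction of the file these module parameters converge on; this is inferred from the logged parameters, not a capture of the real file, and the insertion order after the "defaults" line may vary:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }
    blacklist {
    }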
Nov 25 09:46:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:25 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc28006540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:25.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:25 compute-0 sudo[230053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edldrdklvjecftexvkrsoacsjjtdzlah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063985.6111977-932-107015692806084/AnsiballZ_file.py'
Nov 25 09:46:25 compute-0 sudo[230053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:25 compute-0 python3.9[230055]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:25 compute-0 sudo[230053]: pam_unix(sudo:session): session closed for user root
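The file task above recursively labels /var/local/libexec with setype=container_file_t so the edpm-* helper scripts staged next can be bind-mounted into containers. A sketch of the non-ansible equivalent; chcon applies the label immediately, while surviving a filesystem relabel would additionally need a semanage fcontext rule:

    import subprocess

    # Editor's sketch of the ansible file task above: create the directory
    # and give it the SELinux type that container processes may read.
    subprocess.run(["mkdir", "-p", "/var/local/libexec"], check=True)
    subprocess.run(
        ["chcon", "-R", "-t", "container_file_t", "/var/local/libexec"],
        check=True,
    )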
Nov 25 09:46:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:26 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:26 compute-0 sudo[230206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfpjbkvuyckvpwfgygmhcyszwjmyemxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063986.1309023-956-233752539458503/AnsiballZ_stat.py'
Nov 25 09:46:26 compute-0 sudo[230206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:26 compute-0 ceph-mon[74207]: pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:26 compute-0 python3.9[230208]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:26 compute-0 sudo[230206]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:26 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc2c0056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:26 compute-0 sudo[230284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifoheminrjqksajlsoektfxjzpirytwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063986.1309023-956-233752539458503/AnsiballZ_file.py'
Nov 25 09:46:26 compute-0 sudo[230284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:26 compute-0 python3.9[230286]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:26 compute-0 sudo[230284]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:26.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
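The alertmanager errors above are a name-resolution problem rather than an alerting one: the ceph-dashboard webhook receivers point at np0005534694-6.shiftstack, which the resolver at 192.168.122.80:53 cannot answer, so every notify attempt fails identically and is retried. A quick check that reproduces the failure mode, with hostnames and port copied from the log:

    import socket

    # Probe the three dashboard webhook targets; on this host each lookup
    # should raise socket.gaierror, matching the "no such host" errors above.
    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, "resolves")
        except socket.gaierror as exc:
            print(host, "does not resolve:", exc)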
Nov 25 09:46:27 compute-0 sudo[230436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckddwxivvggmimqcyhlaiahoxlarvstr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063986.9272606-956-186563279612718/AnsiballZ_stat.py'
Nov 25 09:46:27 compute-0 sudo[230436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:27 compute-0 python3.9[230438]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:27.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:27 compute-0 sudo[230436]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:27 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c1610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:27 compute-0 sudo[230514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlpgdywwxhooojnucmhfzpwklplouqwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063986.9272606-956-186563279612718/AnsiballZ_file.py'
Nov 25 09:46:27 compute-0 sudo[230514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:27 compute-0 python3.9[230516]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:27 compute-0 sudo[230514]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:27.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:28 compute-0 sudo[230668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigyqrhsnvzzweydjqadufkjmcspmgct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063987.9394486-1025-68517390454684/AnsiballZ_file.py'
Nov 25 09:46:28 compute-0 sudo[230668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc28006540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:28 compute-0 python3.9[230670]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:28 compute-0 sudo[230668]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:28 compute-0 ceph-mon[74207]: pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:28 compute-0 sudo[230820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfgvwnozrsozfxqjjsjlnsaywzbaopxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063988.4702759-1049-259289863279857/AnsiballZ_stat.py'
Nov 25 09:46:28 compute-0 sudo[230820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:28 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc28006540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:28 compute-0 python3.9[230822]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:28 compute-0 sudo[230820]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:29 compute-0 sudo[230898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyahgcjrzysklxlwbfmxwhkbxmvcypas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063988.4702759-1049-259289863279857/AnsiballZ_file.py'
Nov 25 09:46:29 compute-0 sudo[230898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:29 compute-0 python3.9[230900]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:29 compute-0 sudo[230898]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:29.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:29 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:29 compute-0 sudo[231050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajpjkqtvnmiglewcxuoklhkvavpiukqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063989.348075-1085-269119610849207/AnsiballZ_stat.py'
Nov 25 09:46:29 compute-0 sudo[231050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:29 compute-0 python3.9[231052]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:29 compute-0 sudo[231050]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:29 compute-0 sudo[231129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxrasjgtrjaesdzcjvrbetcwylixteoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063989.348075-1085-269119610849207/AnsiballZ_file.py'
Nov 25 09:46:29 compute-0 sudo[231129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:46:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:30 compute-0 python3.9[231131]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:30 compute-0 sudo[231129]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:30] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 25 09:46:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:30] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 25 09:46:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:30 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c2320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:30 compute-0 ceph-mon[74207]: pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:30 compute-0 sudo[231282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkscedpzofiefnrbtybflnyrtswmajxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063990.3537877-1121-34181041620139/AnsiballZ_systemd.py'
Nov 25 09:46:30 compute-0 sudo[231282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:30 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c2320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:30 compute-0 python3.9[231284]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:30 compute-0 systemd[1]: Reloading.
Nov 25 09:46:30 compute-0 systemd-sysv-generator[231312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:30 compute-0 systemd-rc-local-generator[231308]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:31 compute-0 sudo[231282]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:31.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:31 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c2320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:31 compute-0 sudo[231472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctaylbpogacjtyligylomescessbxve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063991.3004553-1145-95176981656376/AnsiballZ_stat.py'
Nov 25 09:46:31 compute-0 sudo[231472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:31 compute-0 python3.9[231474]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:31 compute-0 sudo[231472]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:31.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:31 compute-0 sudo[231551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiwtvkbzjxomsybrfktraavdzuhqgwhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063991.3004553-1145-95176981656376/AnsiballZ_file.py'
Nov 25 09:46:31 compute-0 sudo[231551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:31 compute-0 python3.9[231553]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:31 compute-0 sudo[231551]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:32 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc10007510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:32 compute-0 ceph-mon[74207]: pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:32 compute-0 sudo[231704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjhyrdxhxabbqfpbulhgxjszjmpcmstv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063992.2604456-1181-125060978103209/AnsiballZ_stat.py'
Nov 25 09:46:32 compute-0 sudo[231704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:32 compute-0 python3.9[231706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:32 compute-0 sudo[231704]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:32 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c2320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:32 compute-0 sudo[231783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeligsdgdmtsldromtzqxfxucnnijzic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063992.2604456-1181-125060978103209/AnsiballZ_file.py'
Nov 25 09:46:32 compute-0 sudo[231783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:32 compute-0 python3.9[231785]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:32 compute-0 sudo[231783]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:33.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:33 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc340c2320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:33 compute-0 sudo[231935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gevyclvspdwovdgkaqpehnnviamivyij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063993.3223453-1217-25680975680631/AnsiballZ_systemd.py'
Nov 25 09:46:33 compute-0 sudo[231935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:33 compute-0 python3.9[231937]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:33 compute-0 systemd[1]: Reloading.
Nov 25 09:46:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:33.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:33 compute-0 systemd-rc-local-generator[231959]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:33 compute-0 systemd-sysv-generator[231962]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:34 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 09:46:34 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 09:46:34 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 09:46:34 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 09:46:34 compute-0 sudo[231935]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:34 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc08003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:34 compute-0 ceph-mon[74207]: pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:34 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100013b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:34 compute-0 sudo[232130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmiuqpgyjsvqenbfdkujtrogfcsprmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063994.5188887-1247-4858397131497/AnsiballZ_file.py'
Nov 25 09:46:34 compute-0 sudo[232130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:34 compute-0 python3.9[232132]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:34 compute-0 sudo[232130]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:35.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:35 compute-0 sudo[232282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhsxjljymnenwbtfblthkipomsjfkpkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063995.091299-1271-276291623775867/AnsiballZ_stat.py'
Nov 25 09:46:35 compute-0 sudo[232282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:35 compute-0 kernel: ganesha.nfsd[224206]: segfault at 50 ip 00007fbcc2bd632e sp 00007fbc917f9210 error 4 in libntirpc.so.5.8[7fbcc2bbb000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 25 09:46:35 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:46:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[216193]: 25/11/2025 09:46:35 : epoch 69257aac : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbc100013b0 fd 38 proxy ignored for local
Nov 25 09:46:35 compute-0 systemd[1]: Started Process Core Dump (PID 232285/UID 0).
Nov 25 09:46:35 compute-0 python3.9[232284]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:35 compute-0 sudo[232282]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:35 compute-0 sudo[232408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epkqbyywbqjaqrttleetixhripjsvkug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063995.091299-1271-276291623775867/AnsiballZ_copy.py'
Nov 25 09:46:35 compute-0 sudo[232408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:35.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:35 compute-0 python3.9[232410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063995.091299-1271-276291623775867/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:35 compute-0 sudo[232408]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:36 compute-0 systemd-coredump[232286]: Process 216197 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007fbcc2bd632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:46:36 compute-0 ceph-mon[74207]: pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:36 compute-0 systemd[1]: systemd-coredump@5-232285-0.service: Deactivated successfully.
Nov 25 09:46:36 compute-0 systemd[1]: systemd-coredump@5-232285-0.service: Consumed 1.015s CPU time.
Nov 25 09:46:36 compute-0 podman[232499]: 2025-11-25 09:46:36.535645352 +0000 UTC m=+0.019621181 container died 6e7e3969d8809a42c5f7fed33a41c74167526f2a14054dd7904f01cc28242822 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:46:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9a969d191129d9317e8ca85d5012fa114bc2c13c5e34dde5a2e71c222d21f81-merged.mount: Deactivated successfully.
Nov 25 09:46:36 compute-0 podman[232499]: 2025-11-25 09:46:36.554445224 +0000 UTC m=+0.038421032 container remove 6e7e3969d8809a42c5f7fed33a41c74167526f2a14054dd7904f01cc28242822 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:46:36 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:46:36 compute-0 sudo[232591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-honycckaqmmofhondyhghvndtprgjrju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063996.4394863-1322-139823797797184/AnsiballZ_file.py'
Nov 25 09:46:36 compute-0 sudo[232591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:36 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:46:36 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.041s CPU time.
Nov 25 09:46:36 compute-0 python3.9[232599]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:46:36 compute-0 sudo[232591]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:36.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:37.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:37.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:37.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:37 compute-0 sudo[232759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aamlbzvddnezgpahrymjvzkcsrovlalu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063997.0116892-1346-81050374202602/AnsiballZ_stat.py'
Nov 25 09:46:37 compute-0 sudo[232759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:37 compute-0 podman[232725]: 2025-11-25 09:46:37.20394301 +0000 UTC m=+0.039214890 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:46:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:37.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:37 compute-0 python3.9[232769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:37 compute-0 sudo[232759]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:37 compute-0 sudo[232890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rknqjcdhaczfyhtjknpyapzqzevjurrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063997.0116892-1346-81050374202602/AnsiballZ_copy.py'
Nov 25 09:46:37 compute-0 sudo[232890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:37 compute-0 python3.9[232892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063997.0116892-1346-81050374202602/.source.json _original_basename=._5b9_5lz follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:37 compute-0 sudo[232890]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:37.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:38 compute-0 sudo[233044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alavslqscwevkhxuaopgrknwlivivatw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063997.980596-1391-171580304077329/AnsiballZ_file.py'
Nov 25 09:46:38 compute-0 sudo[233044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:38 compute-0 python3.9[233046]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:38 compute-0 sudo[233044]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:38 compute-0 ceph-mon[74207]: pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:46:38 compute-0 sudo[233196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jegnliiadzvycpghrmtuiwiwhjmbezfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063998.5568798-1415-101358378829523/AnsiballZ_stat.py'
Nov 25 09:46:38 compute-0 sudo[233196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:38 compute-0 sudo[233196]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:39 compute-0 sudo[233319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqbaaspgmaoqxupcutymvrjlgwiwnjnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063998.5568798-1415-101358378829523/AnsiballZ_copy.py'
Nov 25 09:46:39 compute-0 sudo[233319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:39.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:39 compute-0 sudo[233319]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:39.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:39 compute-0 sudo[233473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruxhfqfcgxggedylsbgkhlvjwqsvmymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764063999.683359-1466-112549168309290/AnsiballZ_container_config_data.py'
Nov 25 09:46:39 compute-0 sudo[233473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:40 compute-0 python3.9[233475]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 09:46:40 compute-0 sudo[233473]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:40] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 25 09:46:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:40] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 25 09:46:40 compute-0 ceph-mon[74207]: pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:40 compute-0 sudo[233552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:46:40 compute-0 sudo[233552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:46:40 compute-0 sudo[233552]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:40 compute-0 sudo[233650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xshjmhylqhxmdsgtujagptumkayiffol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064000.3819735-1493-139062332878722/AnsiballZ_container_config_hash.py'
Nov 25 09:46:40 compute-0 sudo[233650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:40 compute-0 python3.9[233652]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:46:40 compute-0 sudo[233650]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:41.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094641 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:46:41 compute-0 sudo[233802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jowprqqjctxhpmofsrrdtghlldmkrfys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064001.1247866-1520-219692400127799/AnsiballZ_podman_container_info.py'
Nov 25 09:46:41 compute-0 sudo[233802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:41 compute-0 python3.9[233804]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 09:46:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:41.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:41 compute-0 sudo[233802]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:42 compute-0 ceph-mon[74207]: pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:46:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:43 compute-0 sudo[233975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapwgchlrduepwqescpyuxzxtmkmntxc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764064002.8488505-1559-177689464548464/AnsiballZ_edpm_container_manage.py'
Nov 25 09:46:43 compute-0 sudo[233975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:43.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:46:43 compute-0 python3[233977]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:46:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:43.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:44 compute-0 ceph-mon[74207]: pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:46:44 compute-0 podman[233988]: 2025-11-25 09:46:44.584184655 +0000 UTC m=+1.139413324 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 09:46:44 compute-0 podman[234037]: 2025-11-25 09:46:44.677822941 +0000 UTC m=+0.027555913 container create 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 09:46:44 compute-0 podman[234037]: 2025-11-25 09:46:44.66436908 +0000 UTC m=+0.014102074 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 09:46:44 compute-0 python3[233977]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 09:46:44 compute-0 sudo[233975]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:46:44
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.nfs', 'default.rgw.meta', '.rgw.root', '.mgr', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:46:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:46:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:46:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:46:45 compute-0 sudo[234224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vppwlfjrcyfhlgwlwdxrdakbyudicfop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064004.9143708-1583-8038962072857/AnsiballZ_stat.py'
Nov 25 09:46:45 compute-0 sudo[234224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:45 compute-0 podman[234188]: 2025-11-25 09:46:45.142572923 +0000 UTC m=+0.061238691 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 09:46:45 compute-0 python3.9[234233]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:45.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:45 compute-0 sudo[234224]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:46:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:46:45 compute-0 sudo[234392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwprooqigzabsgwlurhnpvjopcokavrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064005.5877945-1610-46139122212783/AnsiballZ_file.py'
Nov 25 09:46:45 compute-0 sudo[234392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:45.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:45 compute-0 python3.9[234394]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:45 compute-0 sudo[234392]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:46 compute-0 sudo[234469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhyrvgsjpfqcegomuuctwgujulukutpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064005.5877945-1610-46139122212783/AnsiballZ_stat.py'
Nov 25 09:46:46 compute-0 sudo[234469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:46 compute-0 python3.9[234471]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:46 compute-0 sudo[234469]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:46 compute-0 ceph-mon[74207]: pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:46:46 compute-0 sudo[234620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdasggfkqfmdgndlmnkiyfbljaqecwqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064006.2868433-1610-30449075140749/AnsiballZ_copy.py'
Nov 25 09:46:46 compute-0 sudo[234620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:46 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 6.
Nov 25 09:46:46 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:46:46 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.041s CPU time.
Nov 25 09:46:46 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:46:46 compute-0 python3.9[234622]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764064006.2868433-1610-30449075140749/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:46 compute-0 sudo[234620]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:46 compute-0 podman[234661]: 2025-11-25 09:46:46.825495587 +0000 UTC m=+0.028922672 container create 28e7cbc720f140272a2bb8408cfb9649c350e1a48dc77e3a7a76d960f824c2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487c527c13c533fb16ea73bb88e1a825681e4ddfa25676b990d52be65bd030ad/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487c527c13c533fb16ea73bb88e1a825681e4ddfa25676b990d52be65bd030ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487c527c13c533fb16ea73bb88e1a825681e4ddfa25676b990d52be65bd030ad/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487c527c13c533fb16ea73bb88e1a825681e4ddfa25676b990d52be65bd030ad/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:46 compute-0 podman[234661]: 2025-11-25 09:46:46.870402008 +0000 UTC m=+0.073829123 container init 28e7cbc720f140272a2bb8408cfb9649c350e1a48dc77e3a7a76d960f824c2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 09:46:46 compute-0 podman[234661]: 2025-11-25 09:46:46.874353188 +0000 UTC m=+0.077780284 container start 28e7cbc720f140272a2bb8408cfb9649c350e1a48dc77e3a7a76d960f824c2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:46:46 compute-0 bash[234661]: 28e7cbc720f140272a2bb8408cfb9649c350e1a48dc77e3a7a76d960f824c2e2
Nov 25 09:46:46 compute-0 podman[234661]: 2025-11-25 09:46:46.813982947 +0000 UTC m=+0.017410063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:46:46 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:46:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:46 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:46:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:46 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:46:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:46 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:46:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:46 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:46:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:46 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:46:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:46 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:46:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:46 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
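[editor's note] The ganesha.nfsd entries above carry their own layout inside the journald line: date, epoch, host, program[thread], function, then colon-separated component, level, and message. For scraping these out of a mixed journal, a small throwaway parser is enough; the field names below are my own labels, not official Ganesha terminology.

    import re

    # Layout inferred from the ganesha.nfsd lines above (labels are mine):
    # date : epoch HEX : host : prog[thread] func :COMPONENT :LEVEL :message
    GANESHA_LINE = re.compile(
        r'^(?P<date>\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) : '
        r'epoch (?P<epoch>[0-9a-f]+) : '
        r'(?P<host>\S+) : '
        r'(?P<prog>\S+)\[(?P<thread>[^\]]+)\] '
        r'(?P<func>\S+) :(?P<component>[^:]+?) :(?P<level>[^:]+?) :(?P<msg>.*)$'
    )

    sample = ('25/11/2025 09:46:46 : epoch 69257b06 : compute-0 : '
              'ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT '
              ':ID Mapper successfully initialized.')
    m = GANESHA_LINE.match(sample)
    print(m.group('component'), '|', m.group('level'), '|', m.group('msg'))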
Nov 25 09:46:46 compute-0 sudo[234773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhdofhlasvaowgdvgzohfrznfsvdlbvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064006.2868433-1610-30449075140749/AnsiballZ_systemd.py'
Nov 25 09:46:46 compute-0 sudo[234773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
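[editor's note] The sudo pair above is Ansible's become wrapper at work: it runs sh -c 'echo BECOME-SUCCESS-<random>; python3.9 AnsiballZ_<module>.py' so the controller can find where privileged module output begins after any sudo/PAM noise. A minimal sketch of that marker scan; this is an illustration of the idea, not Ansible's actual implementation.

    # The controller splits captured output on the random marker and keeps
    # only what follows it, discarding sudo/PAM chatter printed before.
    marker = 'BECOME-SUCCESS-bhdofhlasvaowgdvgzohfrznfsvdlbvc'  # from the log line above
    captured = 'some sudo noise\n' + marker + '\n{"changed": true}\n'  # invented sample
    module_output = captured.split(marker, 1)[1].lstrip('\n')
    print(module_output)   # -> {"changed": true}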
Nov 25 09:46:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:46 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:46:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:46.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:47.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:47.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:47.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
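[editor's note] Every webhook attempt above fails at the same step: resolving np0005534694/5/6.shiftstack against the resolver at 192.168.122.80:53. The lookup can be reproduced in isolation to confirm that DNS, not the dashboard endpoint, is at fault. Note the sketch uses whatever resolver the host is configured with, which on this node is that same 192.168.122.80.

    import socket

    # Reproduce the failing step from the Alertmanager errors above: the
    # webhook POSTs never reach the endpoint because name resolution fails.
    for host in ('np0005534694.shiftstack',
                 'np0005534695.shiftstack',
                 'np0005534696.shiftstack'):
        try:
            print(host, '->', socket.gethostbyname(host))
        except socket.gaierror as exc:
            # analogous to Go's "no such host" in the dispatcher errors
            print(host, '-> lookup failed:', exc)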
Nov 25 09:46:47 compute-0 python3.9[234790]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:46:47 compute-0 systemd[1]: Reloading.
Nov 25 09:46:47 compute-0 systemd-rc-local-generator[234811]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:47 compute-0 systemd-sysv-generator[234814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
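[editor's note] The radosgw "beast" access lines above follow a fixed layout: request pointer, client address, user, timestamp, quoted request line, HTTP status, byte count, and a trailing latency. A throwaway parser sketch; the field names are my own labels.

    import re

    # Layout: beast: <req-ptr>: <client> - <user> [<time>] "<request>"
    #         <status> <bytes> - - - latency=<seconds>s
    BEAST_LINE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[0-9.]+)s'
    )

    sample = ('beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous '
              '[25/Nov/2025:09:46:47.288 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.000000000s')
    m = BEAST_LINE.search(sample)
    print(m.group('client'), m.group('request'), m.group('status'),
          m.group('latency'))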
Nov 25 09:46:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:47 compute-0 sudo[234773]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:47 compute-0 sudo[234900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxxfjuyyjmrretdsbcjhpdvecnlelzyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064006.2868433-1610-30449075140749/AnsiballZ_systemd.py'
Nov 25 09:46:47 compute-0 sudo[234900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:47.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:47 compute-0 python3.9[234902]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:46:47 compute-0 systemd[1]: Reloading.
Nov 25 09:46:48 compute-0 systemd-rc-local-generator[234926]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:48 compute-0 systemd-sysv-generator[234929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:48 compute-0 systemd[1]: Starting multipathd container...
Nov 25 09:46:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c6f2b313d2d7c42ae8d0e765ca37cee963ac6c2dea9b519f74163851880724/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c6f2b313d2d7c42ae8d0e765ca37cee963ac6c2dea9b519f74163851880724/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
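[editor's note] The kernel's 0x7fffffff cutoff in the two xfs warnings above is the classic 32-bit time_t limit, which these filesystems hit presumably because they were formatted without the XFS bigtime feature. The cap as a date:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the end of 32-bit time:
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00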
Nov 25 09:46:48 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e.
Nov 25 09:46:48 compute-0 podman[234944]: 2025-11-25 09:46:48.288646701 +0000 UTC m=+0.075707265 container init 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:46:48 compute-0 multipathd[234956]: + sudo -E kolla_set_configs
Nov 25 09:46:48 compute-0 sudo[234962]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 09:46:48 compute-0 sudo[234962]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 09:46:48 compute-0 sudo[234962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 09:46:48 compute-0 podman[234944]: 2025-11-25 09:46:48.310944349 +0000 UTC m=+0.098004913 container start 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 09:46:48 compute-0 podman[234944]: multipathd
Nov 25 09:46:48 compute-0 systemd[1]: Started multipathd container.
Nov 25 09:46:48 compute-0 multipathd[234956]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:46:48 compute-0 multipathd[234956]: INFO:__main__:Validating config file
Nov 25 09:46:48 compute-0 sudo[234900]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:48 compute-0 multipathd[234956]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:46:48 compute-0 multipathd[234956]: INFO:__main__:Writing out command to execute
Nov 25 09:46:48 compute-0 sudo[234962]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:48 compute-0 multipathd[234956]: ++ cat /run_command
Nov 25 09:46:48 compute-0 multipathd[234956]: + CMD='/usr/sbin/multipathd -d'
Nov 25 09:46:48 compute-0 multipathd[234956]: + ARGS=
Nov 25 09:46:48 compute-0 multipathd[234956]: + sudo kolla_copy_cacerts
Nov 25 09:46:48 compute-0 sudo[234976]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 09:46:48 compute-0 sudo[234976]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 09:46:48 compute-0 sudo[234976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 09:46:48 compute-0 sudo[234976]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:48 compute-0 multipathd[234956]: + [[ ! -n '' ]]
Nov 25 09:46:48 compute-0 multipathd[234956]: + . kolla_extend_start
Nov 25 09:46:48 compute-0 multipathd[234956]: Running command: '/usr/sbin/multipathd -d'
Nov 25 09:46:48 compute-0 multipathd[234956]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 09:46:48 compute-0 multipathd[234956]: + umask 0022
Nov 25 09:46:48 compute-0 multipathd[234956]: + exec /usr/sbin/multipathd -d
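[editor's note] The shell trace above is kolla's standard start sequence: kolla_set_configs reads /var/lib/kolla/config_files/config.json (mounted from multipathd.json, per the container's volume list), the command is written to /run_command, and the start script execs it. A minimal sketch of the kind of config.json that drives this; the keys are the usual kolla ones, but the real multipathd.json contents on this host are an assumption, not copied from the log.

    import json

    # Hypothetical config.json of the shape kolla_set_configs consumes;
    # the "command" value matches the /run_command seen in the trace above.
    config = {
        "command": "/usr/sbin/multipathd -d",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/etc/multipath.conf",
                "dest": "/etc/multipath.conf",
                "owner": "root",
                "perm": "0600",
            }
        ],
    }
    print(json.dumps(config, indent=2))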
Nov 25 09:46:48 compute-0 multipathd[234956]: 2788.994400 | --------start up--------
Nov 25 09:46:48 compute-0 multipathd[234956]: 2788.994411 | read /etc/multipath.conf
Nov 25 09:46:48 compute-0 multipathd[234956]: 2788.998716 | path checkers start up
Nov 25 09:46:48 compute-0 podman[234963]: 2025-11-25 09:46:48.398799169 +0000 UTC m=+0.080784058 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:46:48 compute-0 systemd[1]: 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e-66aa7537a161a1b7.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 09:46:48 compute-0 systemd[1]: 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e-66aa7537a161a1b7.service: Failed with result 'exit-code'.
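[editor's note] The transient unit 4d99...-66aa7537a161a1b7.service that fails above is the timer-driven wrapper for the "podman healthcheck run" command started a few lines earlier; the status=1/FAILURE simply reflects a probe that fired while the container was still health_status=starting with health_failing_streak=1. The same probe can be re-run by hand; the container name is taken from the log (container_name=multipathd).

    import subprocess

    # Re-run the probe that systemd's transient unit wraps; a non-zero
    # return code is what systemd records as status=1/FAILURE.
    probe = subprocess.run(
        ['podman', 'healthcheck', 'run', 'multipathd'],
        capture_output=True, text=True,
    )
    print('healthcheck exit code:', probe.returncode)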
Nov 25 09:46:48 compute-0 ceph-mon[74207]: pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:46:48 compute-0 python3.9[235142]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:46:49 compute-0 sudo[235294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnuxcizqnoamqrdfzptskrhxvwuzptzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064009.051268-1718-175839451714317/AnsiballZ_command.py'
Nov 25 09:46:49 compute-0 sudo[235294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:49.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:46:49 compute-0 python3.9[235296]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
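[editor's note] The task above asks podman for every container that bind-mounts /etc/multipath.conf, so that only those containers need a restart when the file changes. The same lookup, scripted with the exact flags from the log line.

    import subprocess

    # List containers mounting /etc/multipath.conf, names only.
    out = subprocess.run(
        ['podman', 'ps', '--filter', 'volume=/etc/multipath.conf',
         '--format', '{{.Names}}'],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    print(out)   # e.g. ['multipathd']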
Nov 25 09:46:49 compute-0 sudo[235294]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:49.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:49 compute-0 sudo[235456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vptltrcrbhmoplznqblsjeovfomvmydn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064009.603865-1742-83817237645115/AnsiballZ_systemd.py'
Nov 25 09:46:49 compute-0 sudo[235456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:50 compute-0 python3.9[235458]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:46:50 compute-0 systemd[1]: Stopping multipathd container...
Nov 25 09:46:50 compute-0 multipathd[234956]: 2790.736259 | exit (signal)
Nov 25 09:46:50 compute-0 multipathd[234956]: 2790.736479 | --------shut down-------
Nov 25 09:46:50 compute-0 systemd[1]: libpod-4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e.scope: Deactivated successfully.
Nov 25 09:46:50 compute-0 podman[235463]: 2025-11-25 09:46:50.139963972 +0000 UTC m=+0.047053692 container died 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 09:46:50 compute-0 systemd[1]: 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e-66aa7537a161a1b7.timer: Deactivated successfully.
Nov 25 09:46:50 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e.
Nov 25 09:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e-userdata-shm.mount: Deactivated successfully.
Nov 25 09:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8c6f2b313d2d7c42ae8d0e765ca37cee963ac6c2dea9b519f74163851880724-merged.mount: Deactivated successfully.
Nov 25 09:46:50 compute-0 podman[235463]: 2025-11-25 09:46:50.227877973 +0000 UTC m=+0.134967684 container cleanup 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 09:46:50 compute-0 podman[235463]: multipathd
Nov 25 09:46:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:50] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:46:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:46:50] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:46:50 compute-0 podman[235485]: multipathd
Nov 25 09:46:50 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 09:46:50 compute-0 systemd[1]: Stopped multipathd container.
Nov 25 09:46:50 compute-0 systemd[1]: Starting multipathd container...
Nov 25 09:46:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c6f2b313d2d7c42ae8d0e765ca37cee963ac6c2dea9b519f74163851880724/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c6f2b313d2d7c42ae8d0e765ca37cee963ac6c2dea9b519f74163851880724/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 09:46:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e.
Nov 25 09:46:50 compute-0 podman[235495]: 2025-11-25 09:46:50.370101768 +0000 UTC m=+0.079134407 container init 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:46:50 compute-0 multipathd[235507]: + sudo -E kolla_set_configs
Nov 25 09:46:50 compute-0 sudo[235513]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 09:46:50 compute-0 sudo[235513]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 09:46:50 compute-0 sudo[235513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 09:46:50 compute-0 podman[235495]: 2025-11-25 09:46:50.404316798 +0000 UTC m=+0.113349418 container start 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 09:46:50 compute-0 podman[235495]: multipathd
Nov 25 09:46:50 compute-0 systemd[1]: Started multipathd container.
Nov 25 09:46:50 compute-0 multipathd[235507]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:46:50 compute-0 multipathd[235507]: INFO:__main__:Validating config file
Nov 25 09:46:50 compute-0 multipathd[235507]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:46:50 compute-0 multipathd[235507]: INFO:__main__:Writing out command to execute
Nov 25 09:46:50 compute-0 sudo[235513]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:50 compute-0 multipathd[235507]: ++ cat /run_command
Nov 25 09:46:50 compute-0 multipathd[235507]: + CMD='/usr/sbin/multipathd -d'
Nov 25 09:46:50 compute-0 multipathd[235507]: + ARGS=
Nov 25 09:46:50 compute-0 multipathd[235507]: + sudo kolla_copy_cacerts
Nov 25 09:46:50 compute-0 sudo[235456]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:50 compute-0 sudo[235529]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 09:46:50 compute-0 sudo[235529]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 09:46:50 compute-0 sudo[235529]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 09:46:50 compute-0 sudo[235529]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:50 compute-0 podman[235514]: 2025-11-25 09:46:50.450319156 +0000 UTC m=+0.045387720 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:46:50 compute-0 multipathd[235507]: + [[ ! -n '' ]]
Nov 25 09:46:50 compute-0 multipathd[235507]: + . kolla_extend_start
Nov 25 09:46:50 compute-0 multipathd[235507]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 09:46:50 compute-0 multipathd[235507]: Running command: '/usr/sbin/multipathd -d'
Nov 25 09:46:50 compute-0 multipathd[235507]: + umask 0022
Nov 25 09:46:50 compute-0 multipathd[235507]: + exec /usr/sbin/multipathd -d
Nov 25 09:46:50 compute-0 ceph-mon[74207]: pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:46:50 compute-0 systemd[1]: 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e-168f8f525acc0b31.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 09:46:50 compute-0 systemd[1]: 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e-168f8f525acc0b31.service: Failed with result 'exit-code'.
Nov 25 09:46:50 compute-0 multipathd[235507]: 2791.081774 | --------start up--------
Nov 25 09:46:50 compute-0 multipathd[235507]: 2791.081847 | read /etc/multipath.conf
Nov 25 09:46:50 compute-0 multipathd[235507]: 2791.085763 | path checkers start up
Nov 25 09:46:50 compute-0 sudo[235693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysrzuxgzyweaflxuukfogzprzfokkufp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064010.6587894-1766-206069109706486/AnsiballZ_file.py'
Nov 25 09:46:50 compute-0 sudo[235693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:50 compute-0 python3.9[235695]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
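[editor's note] Taken together, the earlier stat of /etc/multipath/.multipath_restart_required, the edpm_multipathd restart, and this file removal form a sentinel-file pattern: a config-writing step drops the flag when multipath.conf changes, the restart step fires only while the flag exists, and the flag is then cleared so an unchanged config never triggers a restart. Reduced to its core, with the path and unit name taken from the log:

    import os
    import subprocess

    FLAG = '/etc/multipath/.multipath_restart_required'

    # Restart only when an earlier step left the flag behind, then clear it.
    if os.path.exists(FLAG):
        subprocess.run(['systemctl', 'restart', 'edpm_multipathd.service'],
                       check=True)
        os.remove(FLAG)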
Nov 25 09:46:51 compute-0 sudo[235693]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:51.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:46:51 compute-0 sudo[235846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejhqrymtwkhukeualmqpstrvozasymnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064011.5400853-1802-80804780635088/AnsiballZ_file.py'
Nov 25 09:46:51 compute-0 sudo[235846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:51.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:51 compute-0 python3.9[235848]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 09:46:51 compute-0 sudo[235846]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:52 compute-0 sudo[235999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyjjyngtqnqfgusmtntedwphpygtfjhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064012.093308-1826-156633164218165/AnsiballZ_modprobe.py'
Nov 25 09:46:52 compute-0 sudo[235999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:52 compute-0 python3.9[236001]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 09:46:52 compute-0 kernel: Key type psk registered
Nov 25 09:46:52 compute-0 ceph-mon[74207]: pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:46:52 compute-0 sudo[235999]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:52 compute-0 sudo[236161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkezvenywljqdrwnpsedpufiaznozsbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064012.6482008-1850-230256869386184/AnsiballZ_stat.py'
Nov 25 09:46:52 compute-0 sudo[236161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:52 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:46:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:52 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:46:52 compute-0 python3.9[236163]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:46:52 compute-0 sudo[236161]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:53 compute-0 sudo[236284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdpymphpxhpckfapapyymdlvxeuznzpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064012.6482008-1850-230256869386184/AnsiballZ_copy.py'
Nov 25 09:46:53 compute-0 sudo[236284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:53.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:46:53 compute-0 python3.9[236286]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764064012.6482008-1850-230256869386184/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:53 compute-0 sudo[236284]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:53.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:53 compute-0 sudo[236437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwgwfbtfcsdtjeiduhtxxlddugrbjilx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064013.7037919-1898-19392693558209/AnsiballZ_lineinfile.py'
Nov 25 09:46:53 compute-0 sudo[236437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:54 compute-0 python3.9[236440]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:46:54 compute-0 sudo[236437]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:54 compute-0 sudo[236590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccmzoemyxmllkvdtmrtkoldmmjgcgygu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064014.2601857-1922-44484672750533/AnsiballZ_systemd.py'
Nov 25 09:46:54 compute-0 sudo[236590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:54 compute-0 ceph-mon[74207]: pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:46:54 compute-0 python3.9[236592]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:46:54 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 09:46:54 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 09:46:54 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 09:46:54 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 09:46:54 compute-0 systemd[1]: Finished Load Kernel Modules.
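[editor's note] The module tasks above follow a standard persistence pattern: modprobe loads nvme-fabrics immediately (the "Key type psk registered" line is a side effect of that load), a drop-in under /etc/modules-load.d plus a line in /etc/modules persists it across boots, and restarting systemd-modules-load validates the drop-in now rather than on the next reboot. The same steps, scripted as a minimal sketch; the legacy /etc/modules append is omitted.

    import pathlib
    import subprocess

    # 1. Load the module for the running kernel.
    subprocess.run(['modprobe', 'nvme-fabrics'], check=True)

    # 2. Persist it for systemd-modules-load at boot.
    pathlib.Path('/etc/modules-load.d/nvme-fabrics.conf').write_text(
        'nvme-fabrics\n')

    # 3. Re-run the loader so a bad conf file fails here, not at boot.
    subprocess.run(['systemctl', 'restart', 'systemd-modules-load.service'],
                   check=True)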
Nov 25 09:46:54 compute-0 sudo[236590]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:55 compute-0 sudo[236746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pefduhqvfalrmniryysjwtpqgwkmiffs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064015.0140173-1946-94275751710884/AnsiballZ_dnf.py'
Nov 25 09:46:55 compute-0 sudo[236746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:55.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:46:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
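[editor's note] Every raw pg target in the autoscaler block above satisfies one relation: target = used_fraction × bias × 300. The factor 300 is consistent with the usual ~100 target PGs per OSD across what looks like three 20 GiB OSDs (60 GiB total), though that reading is inferred from the numbers, not stated in the log; the raw target is then quantized to a power of two and left at the current pg_num when the change is too small to act on. A worked check against three of the pools:

    # raw_target = used_fraction * bias * K, with K = 300 on this cluster
    # (K ~ 100 PGs/OSD x 3 OSDs is an inference from these numbers).
    K = 300
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0,
                               0.0021557249951162337),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0,
                               0.0006104707950771635),
        '.rgw.root':          (3.8154424692322717e-07, 1.0,
                               0.00011446327407696816),
    }
    for name, (used, bias, logged) in pools.items():
        raw = used * bias * K
        print(f'{name}: computed {raw:.16g} vs logged {logged:.16g}')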
Nov 25 09:46:55 compute-0 python3.9[236748]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 09:46:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:55.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:56 compute-0 ceph-mon[74207]: pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:46:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:56.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:57.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:57.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:46:57.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:46:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:46:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:57.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:46:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:46:57 compute-0 systemd[1]: Reloading.
Nov 25 09:46:57 compute-0 systemd-rc-local-generator[236776]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:57 compute-0 systemd-sysv-generator[236779]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:57 compute-0 systemd[1]: Reloading.
Nov 25 09:46:57 compute-0 systemd-rc-local-generator[236814]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:57 compute-0 systemd-sysv-generator[236824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:46:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:57.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:46:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:46:58 compute-0 systemd-logind[744]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 09:46:58 compute-0 systemd-logind[744]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 09:46:58 compute-0 lvm[236863]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:46:58 compute-0 lvm[236863]: VG ceph_vg0 finished
Nov 25 09:46:58 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:46:58 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:46:58 compute-0 systemd[1]: Reloading.
Nov 25 09:46:58 compute-0 systemd-rc-local-generator[236925]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:46:58 compute-0 systemd-sysv-generator[236928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:46:58 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:46:58 compute-0 ceph-mon[74207]: pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:46:58 compute-0 sudo[236746]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
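[editor's note] The two "Unknown block" warnings mean this ganesha.nfsd build does not register the RADOS_URLS and RGW config sections at parse time, and the WARN two lines earlier confirms the config contained no EXPORT entries at all, which is expected for a freshly deployed cephadm NFS daemon until an export is created. For orientation, a minimal CephFS EXPORT block looks roughly like the sketch below; every value is illustrative, not taken from this host.

    # Illustrative nfs-ganesha EXPORT block for the CephFS FSAL; all ids,
    # paths, and names here are examples, not this cluster's real export.
    EXPORT_BLOCK = """
    EXPORT {
        Export_Id = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Protocols = 4;
        Access_Type = RW;
        FSAL {
            Name = CEPH;
            Filesystem = "cephfs";
        }
    }
    """
    print(EXPORT_BLOCK)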
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:46:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:58 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:46:59 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:46:59 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:46:59 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.076s CPU time.
Nov 25 09:46:59 compute-0 systemd[1]: run-r1c113cded0e248f4ac0cd61f4c53f228.service: Deactivated successfully.
Nov 25 09:46:59 compute-0 sudo[238235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvvlczqnvzlxbpkdnngibobkouonmbor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064019.0019624-1970-248141555388990/AnsiballZ_systemd_service.py'
Nov 25 09:46:59 compute-0 sudo[238235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:46:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:59.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:46:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:46:59 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:46:59 compute-0 python3.9[238237]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:46:59 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 25 09:46:59 compute-0 iscsid[225768]: iscsid shutting down.
Nov 25 09:46:59 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 09:46:59 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 25 09:46:59 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 09:46:59 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 09:46:59 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 09:46:59 compute-0 sudo[238235]: pam_unix(sudo:session): session closed for user root
Nov 25 09:46:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:46:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:46:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:59.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:46:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:46:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:00 compute-0 python3.9[238394]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:47:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:00] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:47:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:00] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:47:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:00 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc001e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:00 compute-0 ceph-mon[74207]: pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:00 compute-0 sudo[238427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:47:00 compute-0 sudo[238427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:00 compute-0 sudo[238427]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:00 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0001910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:00 compute-0 sudo[238573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwyhwdqnxslouzqidlehfjpqzzaxetnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064020.573335-2022-152450562169009/AnsiballZ_file.py'
Nov 25 09:47:00 compute-0 sudo[238573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:00 compute-0 python3.9[238575]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:00 compute-0 sudo[238573]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:47:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:01.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:47:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094701 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:47:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:01 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c4003410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:01 compute-0 sudo[238725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgqghnwgfgiwueobpjtswbtarmgxgtbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064021.3868725-2055-21010475168504/AnsiballZ_systemd_service.py'
Nov 25 09:47:01 compute-0 sudo[238725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:01.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:01 compute-0 python3.9[238727]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:47:01 compute-0 systemd[1]: Reloading.
Nov 25 09:47:01 compute-0 systemd-rc-local-generator[238753]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:47:01 compute-0 systemd-sysv-generator[238757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:47:02 compute-0 sudo[238725]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:02 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:02 compute-0 ceph-mon[74207]: pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:02 compute-0 python3.9[238914]: ansible-ansible.builtin.service_facts Invoked
Nov 25 09:47:02 compute-0 network[238931]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 09:47:02 compute-0 network[238932]: 'network-scripts' will be removed from distribution in near future.
Nov 25 09:47:02 compute-0 network[238933]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 09:47:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:02 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc002980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:03.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:47:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:03 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0002250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:03.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094704 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:47:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:04 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c4003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:04 compute-0 ceph-mon[74207]: pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:47:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:04 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c4003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:05 compute-0 sudo[239208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efvgvcaziodmoxsiyhvoxbosqxxupaod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064025.0327718-2112-172436399601110/AnsiballZ_systemd_service.py'
Nov 25 09:47:05 compute-0 sudo[239208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:05.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:47:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:47:05.375 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:47:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:47:05.375 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:47:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:47:05.375 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:47:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:05 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c80025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:05 compute-0 python3.9[239210]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:05 compute-0 sudo[239208]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:05.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:05 compute-0 sudo[239362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzkmksptaygbdyydsxywehjahgibbnpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064025.6308656-2112-196848043307855/AnsiballZ_systemd_service.py'
Nov 25 09:47:05 compute-0 sudo[239362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:06 compute-0 python3.9[239364]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:06 compute-0 sudo[239362]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:06 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc0032a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:06 compute-0 sudo[239516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weutfeahyhdkkwgkcailjwkohvojbjtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064026.1875968-2112-90906304910648/AnsiballZ_systemd_service.py'
Nov 25 09:47:06 compute-0 sudo[239516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:06 compute-0 ceph-mon[74207]: pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:47:06 compute-0 python3.9[239518]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:06 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0002250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:06 compute-0 sudo[239516]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:06 compute-0 sudo[239669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pajoudttwcbuyavvepcstrcvzhvpnbpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064026.7908456-2112-62435378426501/AnsiballZ_systemd_service.py'
Nov 25 09:47:06 compute-0 sudo[239669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:06.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:06.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:07.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:07.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:07 compute-0 python3.9[239671]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:07 compute-0 sudo[239669]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:07.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:47:07 compute-0 podman[239673]: 2025-11-25 09:47:07.338606544 +0000 UTC m=+0.072585276 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 25 09:47:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:07 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c4003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:07 compute-0 sudo[239838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csblqvabpgpurkjlmpjwiiuihlrxwuck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064027.376177-2112-160710375673359/AnsiballZ_systemd_service.py'
Nov 25 09:47:07 compute-0 sudo[239838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:07.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:07 compute-0 python3.9[239840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:07 compute-0 sudo[239838]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:08 compute-0 sudo[239993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dztbsbozqluahpfyvcetfjmjdcsvwphx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064027.9574218-2112-66787711901580/AnsiballZ_systemd_service.py'
Nov 25 09:47:08 compute-0 sudo[239993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:08 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c80025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:08 compute-0 python3.9[239995]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:08 compute-0 sudo[239993]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:08 compute-0 ceph-mon[74207]: pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:47:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:08 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc0032a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:08 compute-0 sudo[240146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pobjmgkypfhagwbxjaeuktmrzygrchtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064028.521795-2112-112253081208882/AnsiballZ_systemd_service.py'
Nov 25 09:47:08 compute-0 sudo[240146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:08 compute-0 python3.9[240148]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:08 compute-0 sudo[240146]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:09 compute-0 sudo[240299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ileftvztuprctjruqpkqnrblfaicbyea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064029.07886-2112-40162425520215/AnsiballZ_systemd_service.py'
Nov 25 09:47:09 compute-0 sudo[240299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:47:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:09.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:09 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d0002250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:09 compute-0 python3.9[240301]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:47:09 compute-0 sudo[240299]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:09.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:10 compute-0 sudo[240454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybpuehgtqaxsbhwhziqqaldlfqezusvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064029.91388-2289-222627696160649/AnsiballZ_file.py'
Nov 25 09:47:10 compute-0 sudo[240454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:10] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:47:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:10] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:47:10 compute-0 python3.9[240456]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:10 compute-0 sudo[240454]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:10 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c4004e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:10 compute-0 ceph-mon[74207]: pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:47:10 compute-0 sudo[240606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hszhajtnnkyuwwwynzpojegsezlsjkcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064030.3778393-2289-111450862495080/AnsiballZ_file.py'
Nov 25 09:47:10 compute-0 sudo[240606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:10 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c80032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:10 compute-0 python3.9[240608]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:10 compute-0 sudo[240606]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:11 compute-0 sudo[240758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgvtbmudkmcmqdihrnnhpwafxkmgljan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064030.838759-2289-77081846940144/AnsiballZ_file.py'
Nov 25 09:47:11 compute-0 sudo[240758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:11 compute-0 python3.9[240760]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:11 compute-0 sudo[240758]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:11.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:11 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc0032a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:11 compute-0 sudo[240910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnmzsuupivbjdwvfxtofvnwtmmmlllu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064031.2855036-2289-255080164245605/AnsiballZ_file.py'
Nov 25 09:47:11 compute-0 sudo[240910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:11 compute-0 python3.9[240912]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:11 compute-0 sudo[240910]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:11 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:47:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:11.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:11 compute-0 sudo[241063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztxpdxrcjuvcwdnmvmgbaxamgvlkbwdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064031.714013-2289-113903823504082/AnsiballZ_file.py'
Nov 25 09:47:11 compute-0 sudo[241063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:12 compute-0 python3.9[241065]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:12 compute-0 sudo[241063]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:12 compute-0 sudo[241216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgaxnspyxwonokgsqdfcnmnpatewrcap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064032.1416204-2289-98817156433799/AnsiballZ_file.py'
Nov 25 09:47:12 compute-0 sudo[241216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:12 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d00095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:12 compute-0 python3.9[241218]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:12 compute-0 sudo[241216]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:12 compute-0 ceph-mon[74207]: pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:12 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c4006820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:12 compute-0 sudo[241368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnrmmedwkhxbjiyjsxfhfxoqlphgcgnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064032.5755806-2289-205819906708035/AnsiballZ_file.py'
Nov 25 09:47:12 compute-0 sudo[241368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:12 compute-0 python3.9[241370]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:12 compute-0 sudo[241368]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:13 compute-0 sudo[241520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aviobwdvsoavuxccdlzjvlfvaqlhbyuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064032.997805-2289-106020359494686/AnsiballZ_file.py'
Nov 25 09:47:13 compute-0 sudo[241520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:13.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:13 compute-0 python3.9[241522]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:13 compute-0 sudo[241520]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:13 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c80032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:13 compute-0 sudo[241673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-munpfqjkrkectpwtblqbhqcdvprpdmux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064033.610395-2460-194371103309161/AnsiballZ_file.py'
Nov 25 09:47:13 compute-0 sudo[241673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:13 compute-0 python3.9[241675]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:13 compute-0 sudo[241673]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:14 compute-0 sudo[241826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrxncusosikkvkxcwyubxdviovsrejgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064034.0472732-2460-8744599125750/AnsiballZ_file.py'
Nov 25 09:47:14 compute-0 sudo[241826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:14 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc0032a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:14 compute-0 python3.9[241828]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:14 compute-0 sudo[241826]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:14 compute-0 ceph-mon[74207]: pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:14 compute-0 sudo[241978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfhlsfrsrwqpuctwgqkemhdvtzpbzhha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064034.4911861-2460-5165835398929/AnsiballZ_file.py'
Nov 25 09:47:14 compute-0 sudo[241978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:14 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d00095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:14 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:47:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:14 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:47:14 compute-0 python3.9[241980]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:14 compute-0 sudo[241978]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:47:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:47:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:47:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:47:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:47:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:47:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:47:15 compute-0 sudo[242130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ateypxbcavemxpbhieqkwotoospwvoqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064034.9374979-2460-31713191399411/AnsiballZ_file.py'
Nov 25 09:47:15 compute-0 sudo[242130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:15 compute-0 python3.9[242132]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:15 compute-0 sudo[242130]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:15 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c4008420 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:15 compute-0 sudo[242291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ursdguvwybzslwhblwbdfqgyygayyzeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064035.3731043-2460-280792068485312/AnsiballZ_file.py'
Nov 25 09:47:15 compute-0 sudo[242291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:15 compute-0 podman[242256]: 2025-11-25 09:47:15.584563493 +0000 UTC m=+0.059722101 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:47:15 compute-0 python3.9[242300]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:15 compute-0 sudo[242291]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:15.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:16 compute-0 sudo[242459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzajjytyifrqjboasihmwghulkjlojih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064035.827189-2460-188994307388319/AnsiballZ_file.py'
Nov 25 09:47:16 compute-0 sudo[242459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:16 compute-0 python3.9[242461]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:16 compute-0 sudo[242459]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:16 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:16 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 09:47:16 compute-0 sudo[242612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfzxlyqjhdblqdsxuzjtqxcekltbmrxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064036.2710228-2460-39801307216562/AnsiballZ_file.py'
Nov 25 09:47:16 compute-0 sudo[242612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:16 compute-0 ceph-mon[74207]: pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:16 compute-0 python3.9[242614]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:16 compute-0 sudo[242612]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:16 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc0032a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:16 compute-0 sudo[242764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axbinwdbgrulhbtcsjpxgscrpfoehupd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064036.7057848-2460-251594325369561/AnsiballZ_file.py'
Nov 25 09:47:16 compute-0 sudo[242764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:16.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:17.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:17.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:17.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:17 compute-0 python3.9[242766]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:17 compute-0 sudo[242764]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:17 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 09:47:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:17 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d00095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:17 compute-0 sudo[242917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iucbqwfgyxdbeebulbwzvfdeipahvtrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064037.3983712-2634-53954231496162/AnsiballZ_command.py'
Nov 25 09:47:17 compute-0 sudo[242917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:17 compute-0 python3.9[242919]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:17 compute-0 sudo[242917]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:17 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:47:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:17.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:18 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c40069a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:18 compute-0 python3.9[243073]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 09:47:18 compute-0 ceph-mon[74207]: pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:18 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:18 compute-0 sudo[243223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtudspdwrnswxwkhzhsmgqbiizzsmqly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064038.6601927-2688-240860644587157/AnsiballZ_systemd_service.py'
Nov 25 09:47:18 compute-0 sudo[243223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:19 compute-0 python3.9[243225]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:47:19 compute-0 systemd[1]: Reloading.
Nov 25 09:47:19 compute-0 systemd-rc-local-generator[243246]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:47:19 compute-0 systemd-sysv-generator[243249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:47:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:19 compute-0 sudo[243223]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:19 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23bc0032a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:19 compute-0 sudo[243410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyoacxqjtjuewfuhqrnvfertonnwzfoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064039.5622516-2712-40895454942300/AnsiballZ_command.py'
Nov 25 09:47:19 compute-0 sudo[243410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:47:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:19.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:47:19 compute-0 python3.9[243412]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:19 compute-0 sudo[243410]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:20 compute-0 sudo[243564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltrwwugvbqsfngxwelmzjsgjjnzhwcxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064040.012834-2712-225780584074166/AnsiballZ_command.py'
Nov 25 09:47:20 compute-0 sudo[243564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:20] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:47:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:20] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:47:20 compute-0 python3.9[243566]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:20 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23d000aa30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:20 compute-0 sudo[243564]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:20 compute-0 ceph-mon[74207]: pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:20 compute-0 sudo[243749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzzlmwgdnxyiprxgypdpwhbzrxorvwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064040.4510202-2712-232661248622228/AnsiballZ_command.py'
Nov 25 09:47:20 compute-0 sudo[243693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:47:20 compute-0 sudo[243693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:20 compute-0 sudo[243749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:20 compute-0 sudo[243693]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:20 compute-0 podman[243691]: 2025-11-25 09:47:20.650799434 +0000 UTC m=+0.045636959 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 09:47:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:20 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c40069a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:20 compute-0 python3.9[243761]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:20 compute-0 sudo[243749]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:21 compute-0 sudo[243914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzqltoimgwlkykpmsiadodyipdsrdpai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064040.9235628-2712-223487038834886/AnsiballZ_command.py'
Nov 25 09:47:21 compute-0 sudo[243914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:21 compute-0 python3.9[243916]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:21 compute-0 sudo[243914]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:21.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:21 compute-0 kernel: ganesha.nfsd[237973]: segfault at 50 ip 00007f247ac6732e sp 00007f243b7fd210 error 4 in libntirpc.so.5.8[7f247ac4c000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 25 09:47:21 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:47:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[234696]: 25/11/2025 09:47:21 : epoch 69257b06 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c8003fe0 fd 39 proxy ignored for local
Nov 25 09:47:21 compute-0 systemd[1]: Started Process Core Dump (PID 243994/UID 0).
Nov 25 09:47:21 compute-0 sudo[244019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:47:21 compute-0 sudo[244019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:21 compute-0 sudo[244019]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:21 compute-0 sudo[244068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:47:21 compute-0 sudo[244068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:21 compute-0 sudo[244119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-najmpdabwywkbvgoucoaeyqdhmqtaqfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064041.3599014-2712-45411369721814/AnsiballZ_command.py'
Nov 25 09:47:21 compute-0 sudo[244119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:21 compute-0 python3.9[244121]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:21 compute-0 sudo[244119]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:21.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:21 compute-0 sudo[244068]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:22 compute-0 sudo[244301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lptxhslpcrrrezeslzlglfpftvfkpucr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064041.8667595-2712-148804149436135/AnsiballZ_command.py'
Nov 25 09:47:22 compute-0 sudo[244301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:22 compute-0 python3.9[244303]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:22 compute-0 sudo[244301]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:22 compute-0 systemd-coredump[243995]: Process 234717 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 47:
                                                    #0  0x00007f247ac6732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:47:22 compute-0 systemd[1]: systemd-coredump@6-243994-0.service: Deactivated successfully.
Nov 25 09:47:22 compute-0 sudo[244464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlytgshvzpifeywcwpnqsgqugbjgyjlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064042.341977-2712-203700026523279/AnsiballZ_command.py'
Nov 25 09:47:22 compute-0 sudo[244464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:22 compute-0 podman[244441]: 2025-11-25 09:47:22.52352785 +0000 UTC m=+0.021982114 container died 28e7cbc720f140272a2bb8408cfb9649c350e1a48dc77e3a7a76d960f824c2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-487c527c13c533fb16ea73bb88e1a825681e4ddfa25676b990d52be65bd030ad-merged.mount: Deactivated successfully.
Nov 25 09:47:22 compute-0 podman[244441]: 2025-11-25 09:47:22.551519034 +0000 UTC m=+0.049973297 container remove 28e7cbc720f140272a2bb8408cfb9649c350e1a48dc77e3a7a76d960f824c2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:47:22 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:47:22 compute-0 ceph-mon[74207]: pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:22 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:47:22 compute-0 python3.9[244469]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:22 compute-0 sudo[244464]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:47:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:22 compute-0 sudo[244644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnqezdrhwsvqiugykesksfapmneghmgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064042.7976859-2712-21107050984661/AnsiballZ_command.py'
Nov 25 09:47:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:47:22 compute-0 sudo[244644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:22 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-0 python3.9[244646]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:47:23 compute-0 sudo[244644]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:23.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:47:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:47:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:47:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:47:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:47:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:47:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:47:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:47:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:47:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:47:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:47:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:47:23 compute-0 sudo[244672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:47:23 compute-0 sudo[244672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:23 compute-0 sudo[244672]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:23 compute-0 sudo[244697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:47:23 compute-0 sudo[244697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:23.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:23 compute-0 podman[244755]: 2025-11-25 09:47:23.832546943 +0000 UTC m=+0.028788839 container create 3ffe9a2eb087d19d5b2217f907a3f8eaf5589e6d6669200c5e164defb867bec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shamir, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:47:23 compute-0 systemd[1]: Started libpod-conmon-3ffe9a2eb087d19d5b2217f907a3f8eaf5589e6d6669200c5e164defb867bec4.scope.
Nov 25 09:47:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:47:23 compute-0 podman[244755]: 2025-11-25 09:47:23.884061767 +0000 UTC m=+0.080303682 container init 3ffe9a2eb087d19d5b2217f907a3f8eaf5589e6d6669200c5e164defb867bec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shamir, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:47:23 compute-0 podman[244755]: 2025-11-25 09:47:23.88843841 +0000 UTC m=+0.084680305 container start 3ffe9a2eb087d19d5b2217f907a3f8eaf5589e6d6669200c5e164defb867bec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shamir, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:47:23 compute-0 podman[244755]: 2025-11-25 09:47:23.889490975 +0000 UTC m=+0.085732891 container attach 3ffe9a2eb087d19d5b2217f907a3f8eaf5589e6d6669200c5e164defb867bec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:47:23 compute-0 elated_shamir[244768]: 167 167
Nov 25 09:47:23 compute-0 systemd[1]: libpod-3ffe9a2eb087d19d5b2217f907a3f8eaf5589e6d6669200c5e164defb867bec4.scope: Deactivated successfully.
Nov 25 09:47:23 compute-0 podman[244755]: 2025-11-25 09:47:23.893221319 +0000 UTC m=+0.089463214 container died 3ffe9a2eb087d19d5b2217f907a3f8eaf5589e6d6669200c5e164defb867bec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-55d50d7e880b0ab8e45c38ef555035541b06dc993eb1439d4ab2e9195fa2b060-merged.mount: Deactivated successfully.
Nov 25 09:47:23 compute-0 podman[244755]: 2025-11-25 09:47:23.915740184 +0000 UTC m=+0.111982079 container remove 3ffe9a2eb087d19d5b2217f907a3f8eaf5589e6d6669200c5e164defb867bec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:47:23 compute-0 podman[244755]: 2025-11-25 09:47:23.821787945 +0000 UTC m=+0.018029840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:47:23 compute-0 systemd[1]: libpod-conmon-3ffe9a2eb087d19d5b2217f907a3f8eaf5589e6d6669200c5e164defb867bec4.scope: Deactivated successfully.
Nov 25 09:47:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-0 ceph-mon[74207]: pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:47:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:47:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:47:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:47:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:47:24 compute-0 podman[244791]: 2025-11-25 09:47:24.037690739 +0000 UTC m=+0.028831169 container create 5d8fd223da3af0b8c0dae8ad7c833ced40a0bbf5ab3d77e01338fdf250ff97df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 09:47:24 compute-0 systemd[1]: Started libpod-conmon-5d8fd223da3af0b8c0dae8ad7c833ced40a0bbf5ab3d77e01338fdf250ff97df.scope.
Nov 25 09:47:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f81afa52fbba04ddf7e23e9edb53130e87d6af66d6d158f1b3487f89211789/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f81afa52fbba04ddf7e23e9edb53130e87d6af66d6d158f1b3487f89211789/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f81afa52fbba04ddf7e23e9edb53130e87d6af66d6d158f1b3487f89211789/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f81afa52fbba04ddf7e23e9edb53130e87d6af66d6d158f1b3487f89211789/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f81afa52fbba04ddf7e23e9edb53130e87d6af66d6d158f1b3487f89211789/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:24 compute-0 podman[244791]: 2025-11-25 09:47:24.092409514 +0000 UTC m=+0.083549944 container init 5d8fd223da3af0b8c0dae8ad7c833ced40a0bbf5ab3d77e01338fdf250ff97df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:47:24 compute-0 podman[244791]: 2025-11-25 09:47:24.098628431 +0000 UTC m=+0.089768861 container start 5d8fd223da3af0b8c0dae8ad7c833ced40a0bbf5ab3d77e01338fdf250ff97df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:47:24 compute-0 podman[244791]: 2025-11-25 09:47:24.099810311 +0000 UTC m=+0.090950740 container attach 5d8fd223da3af0b8c0dae8ad7c833ced40a0bbf5ab3d77e01338fdf250ff97df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 09:47:24 compute-0 podman[244791]: 2025-11-25 09:47:24.02644879 +0000 UTC m=+0.017589230 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:47:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094724 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:47:24 compute-0 hopeful_noyce[244804]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:47:24 compute-0 hopeful_noyce[244804]: --> All data devices are unavailable
Nov 25 09:47:24 compute-0 systemd[1]: libpod-5d8fd223da3af0b8c0dae8ad7c833ced40a0bbf5ab3d77e01338fdf250ff97df.scope: Deactivated successfully.
Nov 25 09:47:24 compute-0 podman[244791]: 2025-11-25 09:47:24.35918484 +0000 UTC m=+0.350325270 container died 5d8fd223da3af0b8c0dae8ad7c833ced40a0bbf5ab3d77e01338fdf250ff97df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 09:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9f81afa52fbba04ddf7e23e9edb53130e87d6af66d6d158f1b3487f89211789-merged.mount: Deactivated successfully.
Nov 25 09:47:24 compute-0 podman[244791]: 2025-11-25 09:47:24.380533317 +0000 UTC m=+0.371673747 container remove 5d8fd223da3af0b8c0dae8ad7c833ced40a0bbf5ab3d77e01338fdf250ff97df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:47:24 compute-0 systemd[1]: libpod-conmon-5d8fd223da3af0b8c0dae8ad7c833ced40a0bbf5ab3d77e01338fdf250ff97df.scope: Deactivated successfully.
Nov 25 09:47:24 compute-0 sudo[244697]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:24 compute-0 sudo[244925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:47:24 compute-0 sudo[244925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:24 compute-0 sudo[244925]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:24 compute-0 sudo[244984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqhzohdlbwbtghredfntwldtrfujivts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064044.2923298-2919-73045587826643/AnsiballZ_file.py'
Nov 25 09:47:24 compute-0 sudo[244984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:24 compute-0 sudo[244978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:47:24 compute-0 sudo[244978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:24 compute-0 python3.9[245003]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:24 compute-0 sudo[244984]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:24 compute-0 podman[245086]: 2025-11-25 09:47:24.799462924 +0000 UTC m=+0.027467838 container create 08e4daeca8beaa7ab107e817e3dbe789e9388d4c1bc728c0816288edb88c2b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:47:24 compute-0 systemd[1]: Started libpod-conmon-08e4daeca8beaa7ab107e817e3dbe789e9388d4c1bc728c0816288edb88c2b23.scope.
Nov 25 09:47:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:47:24 compute-0 podman[245086]: 2025-11-25 09:47:24.849795407 +0000 UTC m=+0.077800331 container init 08e4daeca8beaa7ab107e817e3dbe789e9388d4c1bc728c0816288edb88c2b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:47:24 compute-0 podman[245086]: 2025-11-25 09:47:24.854597513 +0000 UTC m=+0.082602427 container start 08e4daeca8beaa7ab107e817e3dbe789e9388d4c1bc728c0816288edb88c2b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:47:24 compute-0 podman[245086]: 2025-11-25 09:47:24.855587639 +0000 UTC m=+0.083592554 container attach 08e4daeca8beaa7ab107e817e3dbe789e9388d4c1bc728c0816288edb88c2b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:47:24 compute-0 eager_nobel[245128]: 167 167
Nov 25 09:47:24 compute-0 systemd[1]: libpod-08e4daeca8beaa7ab107e817e3dbe789e9388d4c1bc728c0816288edb88c2b23.scope: Deactivated successfully.
Nov 25 09:47:24 compute-0 podman[245086]: 2025-11-25 09:47:24.859146711 +0000 UTC m=+0.087151625 container died 08e4daeca8beaa7ab107e817e3dbe789e9388d4c1bc728c0816288edb88c2b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-93777c2b6832e93ffe55e718df329f244224f6db5b1e58af04d2e305a8b67d03-merged.mount: Deactivated successfully.
Nov 25 09:47:24 compute-0 podman[245086]: 2025-11-25 09:47:24.878320898 +0000 UTC m=+0.106325813 container remove 08e4daeca8beaa7ab107e817e3dbe789e9388d4c1bc728c0816288edb88c2b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:47:24 compute-0 podman[245086]: 2025-11-25 09:47:24.788503146 +0000 UTC m=+0.016508081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:47:24 compute-0 systemd[1]: libpod-conmon-08e4daeca8beaa7ab107e817e3dbe789e9388d4c1bc728c0816288edb88c2b23.scope: Deactivated successfully.
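The block above is one complete cephadm helper run: a throwaway quay.io/ceph/ceph container (eager_nobel) is created, initialized, started, attached, exits, and is removed within roughly 100 ms. Note the image pull event carries an earlier in-process offset (m=+0.016) than the create event even though it is journaled last, consistent with podman stamping the event when the command starts and flushing it at exit. A minimal sketch for watching the same lifecycle live, assuming podman is installed; the image filter value is taken from the log and may need the full reference depending on filter semantics:

    import subprocess

    # Stream libpod lifecycle events (create, init, start, attach, died,
    # remove) for the quay.io/ceph/ceph helper containers; runs until
    # interrupted, like following the journal.
    proc = subprocess.Popen(
        ["podman", "events",
         "--filter", "image=quay.io/ceph/ceph",
         "--format", "{{.Status}} {{.Name}}"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        print(line.rstrip())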
Nov 25 09:47:24 compute-0 sudo[245219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coxgvtzrdpnetdbcruqwtwymqgefpyow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064044.768751-2919-84779261305170/AnsiballZ_file.py'
Nov 25 09:47:24 compute-0 sudo[245219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:25 compute-0 podman[245225]: 2025-11-25 09:47:25.000146378 +0000 UTC m=+0.028835627 container create f848ad73b4811ed16f0ecc147cf12d46e625612d83b3da8a9bc23a2696020324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:47:25 compute-0 systemd[1]: Started libpod-conmon-f848ad73b4811ed16f0ecc147cf12d46e625612d83b3da8a9bc23a2696020324.scope.
Nov 25 09:47:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8327c8389479da958ffb0ed0c3696216fb91a1f9724544ec129dbd28f2ed513/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8327c8389479da958ffb0ed0c3696216fb91a1f9724544ec129dbd28f2ed513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8327c8389479da958ffb0ed0c3696216fb91a1f9724544ec129dbd28f2ed513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8327c8389479da958ffb0ed0c3696216fb91a1f9724544ec129dbd28f2ed513/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:25 compute-0 podman[245225]: 2025-11-25 09:47:25.059502067 +0000 UTC m=+0.088191336 container init f848ad73b4811ed16f0ecc147cf12d46e625612d83b3da8a9bc23a2696020324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_feynman, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:47:25 compute-0 podman[245225]: 2025-11-25 09:47:25.066739635 +0000 UTC m=+0.095428884 container start f848ad73b4811ed16f0ecc147cf12d46e625612d83b3da8a9bc23a2696020324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:47:25 compute-0 podman[245225]: 2025-11-25 09:47:25.067773284 +0000 UTC m=+0.096462533 container attach f848ad73b4811ed16f0ecc147cf12d46e625612d83b3da8a9bc23a2696020324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_feynman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:47:25 compute-0 podman[245225]: 2025-11-25 09:47:24.989204355 +0000 UTC m=+0.017893614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:47:25 compute-0 python3.9[245227]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:25 compute-0 sudo[245219]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:25 compute-0 recursing_feynman[245240]: {
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:     "1": [
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:         {
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "devices": [
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "/dev/loop3"
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             ],
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "lv_name": "ceph_lv0",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "lv_size": "21470642176",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "name": "ceph_lv0",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "tags": {
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.cluster_name": "ceph",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.crush_device_class": "",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.encrypted": "0",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.osd_id": "1",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.type": "block",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.vdo": "0",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:                 "ceph.with_tpm": "0"
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             },
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "type": "block",
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:             "vg_name": "ceph_vg0"
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:         }
Nov 25 09:47:25 compute-0 recursing_feynman[245240]:     ]
Nov 25 09:47:25 compute-0 recursing_feynman[245240]: }
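The recursing_feynman payload above is a complete JSON inventory in the shape ceph-volume's lvm list emits: a map from OSD id to its logical volumes, each carrying ceph.* tags that tie /dev/ceph_vg0/ceph_lv0 on /dev/loop3 to osd.1 in cluster af1c9ae3-08d7-5547-a53d-2cccf7c6ef90. A minimal parsing sketch over a trimmed copy of that structure (most tags elided):

    import json

    # Trimmed copy of the inventory printed by the container above:
    # OSD id -> list of logical volumes with their ceph.* tags.
    payload = """
    {
        "1": [
            {
                "devices": ["/dev/loop3"],
                "lv_path": "/dev/ceph_vg0/ceph_lv0",
                "lv_size": "21470642176",
                "tags": {
                    "ceph.osd_id": "1",
                    "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
                    "ceph.type": "block"
                }
            }
        ]
    }
    """

    for osd_id, lvs in json.loads(payload).items():
        for lv in lvs:
            gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({gib:.0f} GiB on {','.join(lv['devices'])}, "
                  f"fsid {lv['tags']['ceph.osd_fsid']})")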
Nov 25 09:47:25 compute-0 systemd[1]: libpod-f848ad73b4811ed16f0ecc147cf12d46e625612d83b3da8a9bc23a2696020324.scope: Deactivated successfully.
Nov 25 09:47:25 compute-0 podman[245225]: 2025-11-25 09:47:25.312995076 +0000 UTC m=+0.341684325 container died f848ad73b4811ed16f0ecc147cf12d46e625612d83b3da8a9bc23a2696020324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_feynman, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:47:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8327c8389479da958ffb0ed0c3696216fb91a1f9724544ec129dbd28f2ed513-merged.mount: Deactivated successfully.
Nov 25 09:47:25 compute-0 podman[245225]: 2025-11-25 09:47:25.334689276 +0000 UTC m=+0.363378524 container remove f848ad73b4811ed16f0ecc147cf12d46e625612d83b3da8a9bc23a2696020324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_feynman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:47:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:25 compute-0 systemd[1]: libpod-conmon-f848ad73b4811ed16f0ecc147cf12d46e625612d83b3da8a9bc23a2696020324.scope: Deactivated successfully.
Nov 25 09:47:25 compute-0 sudo[244978]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:25 compute-0 sudo[245358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:47:25 compute-0 sudo[245358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:25 compute-0 sudo[245358]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:25 compute-0 sudo[245407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:47:25 compute-0 sudo[245407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:25 compute-0 sudo[245456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwwahjjwsywvyriniitsydrmdzixzlgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064045.2743404-2919-278756897787286/AnsiballZ_file.py'
Nov 25 09:47:25 compute-0 sudo[245456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:25 compute-0 python3.9[245460]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:25 compute-0 sudo[245456]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:25 compute-0 podman[245516]: 2025-11-25 09:47:25.754854713 +0000 UTC m=+0.028788209 container create 5442361b6a53a9c176150d9292558fe5a1a00d9edabec292cf9e7b4887fab4cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yonath, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Nov 25 09:47:25 compute-0 systemd[1]: Started libpod-conmon-5442361b6a53a9c176150d9292558fe5a1a00d9edabec292cf9e7b4887fab4cd.scope.
Nov 25 09:47:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:47:25 compute-0 podman[245516]: 2025-11-25 09:47:25.802754699 +0000 UTC m=+0.076688205 container init 5442361b6a53a9c176150d9292558fe5a1a00d9edabec292cf9e7b4887fab4cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yonath, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:47:25 compute-0 podman[245516]: 2025-11-25 09:47:25.807153473 +0000 UTC m=+0.081086970 container start 5442361b6a53a9c176150d9292558fe5a1a00d9edabec292cf9e7b4887fab4cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:47:25 compute-0 reverent_yonath[245541]: 167 167
Nov 25 09:47:25 compute-0 podman[245516]: 2025-11-25 09:47:25.809605458 +0000 UTC m=+0.083538974 container attach 5442361b6a53a9c176150d9292558fe5a1a00d9edabec292cf9e7b4887fab4cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yonath, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:47:25 compute-0 systemd[1]: libpod-5442361b6a53a9c176150d9292558fe5a1a00d9edabec292cf9e7b4887fab4cd.scope: Deactivated successfully.
Nov 25 09:47:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:25.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:25 compute-0 podman[245516]: 2025-11-25 09:47:25.812619372 +0000 UTC m=+0.086552868 container died 5442361b6a53a9c176150d9292558fe5a1a00d9edabec292cf9e7b4887fab4cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-68e21a6dd82955cb2a739b59537b66adc1d35a16c3e69014172b17600580feb4-merged.mount: Deactivated successfully.
Nov 25 09:47:25 compute-0 podman[245516]: 2025-11-25 09:47:25.830916333 +0000 UTC m=+0.104849830 container remove 5442361b6a53a9c176150d9292558fe5a1a00d9edabec292cf9e7b4887fab4cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yonath, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:47:25 compute-0 podman[245516]: 2025-11-25 09:47:25.744632527 +0000 UTC m=+0.018566042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:47:25 compute-0 systemd[1]: libpod-conmon-5442361b6a53a9c176150d9292558fe5a1a00d9edabec292cf9e7b4887fab4cd.scope: Deactivated successfully.
Nov 25 09:47:25 compute-0 podman[245626]: 2025-11-25 09:47:25.951530048 +0000 UTC m=+0.029838828 container create c0e82621c1bb1aff48949d440e4c97201606fca0b4653638a23bcfd0b48fd30e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_joliot, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:47:25 compute-0 systemd[1]: Started libpod-conmon-c0e82621c1bb1aff48949d440e4c97201606fca0b4653638a23bcfd0b48fd30e.scope.
Nov 25 09:47:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9dc2b3b1e8223d7f65bcd87a7360158cf5fd6a66199401e0b50e0e0178f484/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9dc2b3b1e8223d7f65bcd87a7360158cf5fd6a66199401e0b50e0e0178f484/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9dc2b3b1e8223d7f65bcd87a7360158cf5fd6a66199401e0b50e0e0178f484/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9dc2b3b1e8223d7f65bcd87a7360158cf5fd6a66199401e0b50e0e0178f484/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:26 compute-0 podman[245626]: 2025-11-25 09:47:26.007854852 +0000 UTC m=+0.086163652 container init c0e82621c1bb1aff48949d440e4c97201606fca0b4653638a23bcfd0b48fd30e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_joliot, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:47:26 compute-0 podman[245626]: 2025-11-25 09:47:26.01271664 +0000 UTC m=+0.091025420 container start c0e82621c1bb1aff48949d440e4c97201606fca0b4653638a23bcfd0b48fd30e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_joliot, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:47:26 compute-0 podman[245626]: 2025-11-25 09:47:26.013688031 +0000 UTC m=+0.091996812 container attach c0e82621c1bb1aff48949d440e4c97201606fca0b4653638a23bcfd0b48fd30e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_joliot, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:47:26 compute-0 sudo[245693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvdlbmbuywhubpzrgeizftrambkkajsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064045.7978501-2985-137903577019361/AnsiballZ_file.py'
Nov 25 09:47:26 compute-0 sudo[245693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:26 compute-0 podman[245626]: 2025-11-25 09:47:25.939274639 +0000 UTC m=+0.017583439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:47:26 compute-0 python3.9[245696]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:26 compute-0 sudo[245693]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:26 compute-0 ceph-mon[74207]: pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:26 compute-0 epic_joliot[245663]: {}
Nov 25 09:47:26 compute-0 lvm[245893]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:47:26 compute-0 lvm[245893]: VG ceph_vg0 finished
Nov 25 09:47:26 compute-0 systemd[1]: libpod-c0e82621c1bb1aff48949d440e4c97201606fca0b4653638a23bcfd0b48fd30e.scope: Deactivated successfully.
Nov 25 09:47:26 compute-0 podman[245626]: 2025-11-25 09:47:26.504505991 +0000 UTC m=+0.582814791 container died c0e82621c1bb1aff48949d440e4c97201606fca0b4653638a23bcfd0b48fd30e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_joliot, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:47:26 compute-0 sudo[245919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvtynmhmshlsezgbybflmnixuunpceus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064046.3228617-2985-120792323705036/AnsiballZ_file.py'
Nov 25 09:47:26 compute-0 sudo[245919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a9dc2b3b1e8223d7f65bcd87a7360158cf5fd6a66199401e0b50e0e0178f484-merged.mount: Deactivated successfully.
Nov 25 09:47:26 compute-0 podman[245626]: 2025-11-25 09:47:26.535449352 +0000 UTC m=+0.613758132 container remove c0e82621c1bb1aff48949d440e4c97201606fca0b4653638a23bcfd0b48fd30e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:47:26 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 09:47:26 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 09:47:26 compute-0 systemd[1]: libpod-conmon-c0e82621c1bb1aff48949d440e4c97201606fca0b4653638a23bcfd0b48fd30e.scope: Deactivated successfully.
Nov 25 09:47:26 compute-0 sudo[245407]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:47:26 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:47:26 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:26 compute-0 sudo[245934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:47:26 compute-0 sudo[245934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:26 compute-0 sudo[245934]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:26 compute-0 python3.9[245928]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:26 compute-0 sudo[245919]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:26 compute-0 sudo[246108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufnoozhcwzqmmkmenkqcpodnwjiavjgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064046.8099568-2985-70256289091467/AnsiballZ_file.py'
Nov 25 09:47:26 compute-0 sudo[246108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:26.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:27.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:27.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:27.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
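All three notify failures above are the same DNS problem: the ceph-dashboard webhook receivers point at np0005534694-6.shiftstack:8443, and the resolver at 192.168.122.80:53 has no records for those names, so alertmanager exhausts its retries. The failing lookups can be reproduced independently of alertmanager:

    import socket

    # The same name resolution alertmanager retries above; each lookup
    # should raise gaierror ("no such host") until the shiftstack names
    # become resolvable on this host.
    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            addr = socket.getaddrinfo(host, 8443)[0][4][0]
            print(f"{host} -> {addr}")
        except socket.gaierror as err:
            print(f"{host}: {err}")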
Nov 25 09:47:27 compute-0 python3.9[246110]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:27 compute-0 sudo[246108]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:27.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094727 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:47:27 compute-0 sudo[246260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkabcacvonpnlmumbsmuqysnnhatyvpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064047.2589943-2985-189492953159216/AnsiballZ_file.py'
Nov 25 09:47:27 compute-0 sudo[246260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:47:27 compute-0 python3.9[246262]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:27 compute-0 sudo[246260]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:47:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:27.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:47:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:27 compute-0 sudo[246413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcibzpcfsiwfeggubrxxkzvgvqdubqgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064047.702794-2985-126567875650672/AnsiballZ_file.py'
Nov 25 09:47:27 compute-0 sudo[246413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:28 compute-0 python3.9[246415]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:28 compute-0 sudo[246413]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:28 compute-0 sudo[246566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmyntpvrmkdlpmtpryeeqaxteiccqqme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064048.1400962-2985-136947170964376/AnsiballZ_file.py'
Nov 25 09:47:28 compute-0 sudo[246566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:28 compute-0 python3.9[246568]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:28 compute-0 sudo[246566]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:28 compute-0 ceph-mon[74207]: pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:28 compute-0 sudo[246718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyzpshzfbizpnmdltzhszlpmkdznqwrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064048.5868592-2985-3505488726353/AnsiballZ_file.py'
Nov 25 09:47:28 compute-0 sudo[246718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:28 compute-0 python3.9[246720]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:28 compute-0 sudo[246718]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:29.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:29.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:47:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
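The mgr's cephadm module polls the mon for the OSD blocklist with the JSON command dispatched above; an equivalent query from a shell, assuming an admin keyring is available on the host:

    import json
    import subprocess

    # Same mon command the mgr dispatches above ("osd blocklist ls"),
    # issued through the ceph CLI; the result is a JSON array.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out) or "blocklist is empty")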
Nov 25 09:47:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:30] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:47:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:30] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:47:30 compute-0 ceph-mon[74207]: pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:31.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:32 compute-0 ceph-mon[74207]: pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:32 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 7.
Nov 25 09:47:32 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:47:32 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:47:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:32 compute-0 podman[246786]: 2025-11-25 09:47:32.826595958 +0000 UTC m=+0.029250372 container create 3847d700fc2fa3822e8ec766a11cdbe301b7162b08c46532706aed69925cb784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 09:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a822c5ad7b41dbaf5564843471e831f440ead7f8e26bd70641c8a9f9c84266/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a822c5ad7b41dbaf5564843471e831f440ead7f8e26bd70641c8a9f9c84266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a822c5ad7b41dbaf5564843471e831f440ead7f8e26bd70641c8a9f9c84266/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a822c5ad7b41dbaf5564843471e831f440ead7f8e26bd70641c8a9f9c84266/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:47:32 compute-0 podman[246786]: 2025-11-25 09:47:32.868054509 +0000 UTC m=+0.070708933 container init 3847d700fc2fa3822e8ec766a11cdbe301b7162b08c46532706aed69925cb784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:47:32 compute-0 podman[246786]: 2025-11-25 09:47:32.872187711 +0000 UTC m=+0.074842126 container start 3847d700fc2fa3822e8ec766a11cdbe301b7162b08c46532706aed69925cb784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:47:32 compute-0 bash[246786]: 3847d700fc2fa3822e8ec766a11cdbe301b7162b08c46532706aed69925cb784
Nov 25 09:47:32 compute-0 podman[246786]: 2025-11-25 09:47:32.81410232 +0000 UTC m=+0.016756755 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:47:32 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:47:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:32 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:47:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:32 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:47:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:32 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:47:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:32 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:47:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:32 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:47:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:32 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:47:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:32 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:47:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:32 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:47:33 compute-0 sudo[246964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahuowhulstavznsdvxzyemljeyjhvwmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064052.9558995-3310-247522663570329/AnsiballZ_getent.py'
Nov 25 09:47:33 compute-0 sudo[246964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:47:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:33.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:33 compute-0 python3.9[246966]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 09:47:33 compute-0 sudo[246964]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:33.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:33 compute-0 sudo[247119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvmlzemliuoxdzunrnfmuhaldowfxtll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064053.6060956-3334-157889446845759/AnsiballZ_group.py'
Nov 25 09:47:33 compute-0 sudo[247119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:34 compute-0 python3.9[247121]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 09:47:34 compute-0 groupadd[247122]: group added to /etc/group: name=nova, GID=42436
Nov 25 09:47:34 compute-0 groupadd[247122]: group added to /etc/gshadow: name=nova
Nov 25 09:47:34 compute-0 groupadd[247122]: new group: name=nova, GID=42436
Nov 25 09:47:34 compute-0 sudo[247119]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:34 compute-0 ceph-mon[74207]: pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:47:34 compute-0 sudo[247277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wggtqayddrhxnzjjvwgwxyuotxptrsan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064054.307204-3358-71699543087100/AnsiballZ_user.py'
Nov 25 09:47:34 compute-0 sudo[247277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:34 compute-0 python3.9[247279]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 09:47:34 compute-0 useradd[247281]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 25 09:47:34 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:47:34 compute-0 useradd[247281]: add 'nova' to group 'libvirt'
Nov 25 09:47:34 compute-0 useradd[247281]: add 'nova' to shadow group 'libvirt'
Nov 25 09:47:34 compute-0 sudo[247277]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:47:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:35.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:35 compute-0 sshd-session[247313]: Accepted publickey for zuul from 192.168.122.30 port 52860 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 09:47:35 compute-0 systemd-logind[744]: New session 55 of user zuul.
Nov 25 09:47:35 compute-0 systemd[1]: Started Session 55 of User zuul.
Nov 25 09:47:35 compute-0 sshd-session[247313]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:47:35 compute-0 sshd-session[247317]: Received disconnect from 192.168.122.30 port 52860:11: disconnected by user
Nov 25 09:47:35 compute-0 sshd-session[247317]: Disconnected from user zuul 192.168.122.30 port 52860
Nov 25 09:47:35 compute-0 sshd-session[247313]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:47:35 compute-0 systemd-logind[744]: Session 55 logged out. Waiting for processes to exit.
Nov 25 09:47:35 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Nov 25 09:47:35 compute-0 systemd-logind[744]: Removed session 55.
Nov 25 09:47:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:35.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:36 compute-0 python3.9[247468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:36 compute-0 ceph-mon[74207]: pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:47:36 compute-0 python3.9[247589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064055.9801157-3433-79300998434402/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:36.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:37.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:37.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:37.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:37 compute-0 python3.9[247739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:47:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:37.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:37 compute-0 python3.9[247815]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:37 compute-0 podman[247816]: 2025-11-25 09:47:37.467524814 +0000 UTC m=+0.041336121 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:47:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:37.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:37 compute-0 python3.9[247982]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:38 compute-0 python3.9[248104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064057.5253398-3433-192687227159940/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:38 compute-0 ceph-mon[74207]: pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:47:38 compute-0 python3.9[248254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:38 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:47:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:38 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:47:39 compute-0 python3.9[248375]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064058.3550603-3433-66566791735563/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:47:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:39.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:39 compute-0 python3.9[248525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:39.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:39 compute-0 python3.9[248647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064059.1955218-3433-120318826073517/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:40] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:47:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:40] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:47:40 compute-0 python3.9[248798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:40 compute-0 ceph-mon[74207]: pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:47:40 compute-0 sudo[248920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:47:40 compute-0 sudo[248920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:47:40 compute-0 sudo[248920]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:40 compute-0 python3.9[248919]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064059.9807425-3433-3300944615227/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:41 compute-0 sudo[249094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrgjfqapnjbhtnwetgypnslslpzxqcga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064060.9795635-3682-192406110169158/AnsiballZ_file.py'
Nov 25 09:47:41 compute-0 sudo[249094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:41 compute-0 python3.9[249096]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:41 compute-0 sudo[249094]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:41.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:41 compute-0 sudo[249247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoklbglvkgtzqmwqfbxrbbrzipeykrkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064061.5366678-3706-108249306615507/AnsiballZ_copy.py'
Nov 25 09:47:41 compute-0 sudo[249247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:41.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:41 compute-0 python3.9[249249]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:47:41 compute-0 sudo[249247]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:42 compute-0 sudo[249400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwepaenxqyggptopwhmdczuynueahow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064062.0923233-3730-2906246866852/AnsiballZ_stat.py'
Nov 25 09:47:42 compute-0 sudo[249400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:42 compute-0 python3.9[249402]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:47:42 compute-0 sudo[249400]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:42 compute-0 ceph-mon[74207]: pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:47:42 compute-0 sudo[249552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnaluwzablmeufbcndtmcvdsanucafde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064062.6352918-3754-234809958532615/AnsiballZ_stat.py'
Nov 25 09:47:42 compute-0 sudo[249552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:42 compute-0 python3.9[249554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:42 compute-0 sudo[249552]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:43 compute-0 sudo[249675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opcszblqpbeqcamfsmhgjdppgjtwjuwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064062.6352918-3754-234809958532615/AnsiballZ_copy.py'
Nov 25 09:47:43 compute-0 sudo[249675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:47:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:47:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:43.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:47:43 compute-0 python3.9[249677]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764064062.6352918-3754-234809958532615/.source _original_basename=.m16o2kjn follow=False checksum=c7a71e26033212c4c95a153c95deb0d1fcc682f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 09:47:43 compute-0 sudo[249675]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:43.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:44 compute-0 python3.9[249831]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:47:44 compute-0 ceph-mon[74207]: pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:47:44 compute-0 python3.9[249983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:47:44
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', '.nfs', 'default.rgw.log', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'backups']
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:47:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:47:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:47:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:47:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:44 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:47:45 compute-0 python3.9[250104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064064.3363235-3832-115825716357534/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:47:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:47:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:45.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:47:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:45 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:45 compute-0 python3.9[250270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 09:47:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:47:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:45 compute-0 podman[250366]: 2025-11-25 09:47:45.863808299 +0000 UTC m=+0.057216354 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:47:45 compute-0 python3.9[250404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064065.254122-3877-244301063976225/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 09:47:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:46 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x55faaf81ef40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:46 compute-0 sudo[250566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxvsusjyeknvzlbuzkozurdtkxlafqhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064066.404733-3928-121984048797700/AnsiballZ_container_config_data.py'
Nov 25 09:47:46 compute-0 sudo[250566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:46 compute-0 ceph-mon[74207]: pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:47:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:46 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:46 compute-0 python3.9[250568]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 09:47:46 compute-0 sudo[250566]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:46.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:47.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:47.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:47.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:47 compute-0 sudo[250718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqjxdpngyytqyryeyltdbycnickesjjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064067.004432-3955-98014350107865/AnsiballZ_container_config_hash.py'
Nov 25 09:47:47 compute-0 sudo[250718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:47 compute-0 python3.9[250720]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:47:47 compute-0 sudo[250718]: pam_unix(sudo:session): session closed for user root
Nov 25 09:47:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:47.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094747 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:47:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:47 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37e0001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:47 compute-0 sudo[250871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yflxhctbgpvxwhwoimydanetqyikldtg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764064067.7076838-3985-103627521255011/AnsiballZ_edpm_container_manage.py'
Nov 25 09:47:47 compute-0 sudo[250871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:47:48 compute-0 python3[250874]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:47:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:48 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x55faaf81ef40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:48 compute-0 ceph-mon[74207]: pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:47:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:48 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x55faaf81ef40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:47:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:47:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:49.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:49 compute-0 kernel: ganesha.nfsd[250106]: segfault at 50 ip 00007f38846f332e sp 00007f384bffe210 error 4 in libntirpc.so.5.8[7f38846d8000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 25 09:47:49 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:47:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[246797]: 25/11/2025 09:47:49 : epoch 69257b34 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x55faaf81ef40 fd 38 proxy ignored for local
Nov 25 09:47:49 compute-0 systemd[1]: Started Process Core Dump (PID 250905/UID 0).
Nov 25 09:47:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:49.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:50] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:47:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:47:50] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:47:50 compute-0 systemd-coredump[250906]: Process 246801 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007f38846f332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:47:50 compute-0 systemd[1]: systemd-coredump@7-250905-0.service: Deactivated successfully.
Nov 25 09:47:50 compute-0 podman[250913]: 2025-11-25 09:47:50.62678115 +0000 UTC m=+0.024651461 container died 3847d700fc2fa3822e8ec766a11cdbe301b7162b08c46532706aed69925cb784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a822c5ad7b41dbaf5564843471e831f440ead7f8e26bd70641c8a9f9c84266-merged.mount: Deactivated successfully.
Nov 25 09:47:50 compute-0 podman[250913]: 2025-11-25 09:47:50.644519666 +0000 UTC m=+0.042389957 container remove 3847d700fc2fa3822e8ec766a11cdbe301b7162b08c46532706aed69925cb784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:47:50 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:47:50 compute-0 ceph-mon[74207]: pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:47:50 compute-0 podman[250925]: 2025-11-25 09:47:50.733193962 +0000 UTC m=+0.064010762 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 09:47:50 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:47:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:47:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:47:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:51.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:47:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:51.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:52 compute-0 ceph-mon[74207]: pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:47:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:47:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:53.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:47:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:53.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:54 compute-0 ceph-mon[74207]: pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:47:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:47:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094755 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:47:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:47:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:55.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:47:56 compute-0 ceph-mon[74207]: pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:57.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:57.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:57.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:47:57.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:47:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:47:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:57.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:58 compute-0 ceph-mon[74207]: pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:47:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:47:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:47:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:47:59 compute-0 ceph-mon[74207]: pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:47:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:47:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:47:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:59.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:47:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:47:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:00 compute-0 podman[250885]: 2025-11-25 09:48:00.011080065 +0000 UTC m=+11.854762132 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 09:48:00 compute-0 podman[251016]: 2025-11-25 09:48:00.106720446 +0000 UTC m=+0.028495179 container create a4538bf8fe4f6c62f44a200a54e54126b3f9307b40a0e9865c19c88412037212 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:48:00 compute-0 podman[251016]: 2025-11-25 09:48:00.09285436 +0000 UTC m=+0.014629114 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 09:48:00 compute-0 python3[250874]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 25 09:48:00 compute-0 sudo[250871]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:00] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:48:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:00] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:48:00 compute-0 sudo[251193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyzbobgwrurffcxnylubsgscukvfujui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064080.326228-4009-58732439942281/AnsiballZ_stat.py'
Nov 25 09:48:00 compute-0 sudo[251193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:00 compute-0 python3.9[251195]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:48:00 compute-0 sudo[251193]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:00 compute-0 sudo[251201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:48:00 compute-0 sudo[251201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:00 compute-0 sudo[251201]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:00 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 8.
Nov 25 09:48:00 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:48:00 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:48:01 compute-0 podman[251287]: 2025-11-25 09:48:01.077633618 +0000 UTC m=+0.029400687 container create bf08aebaf45a5f98995f2aa0990acb39be3ad8282b92f73ae3cb21c6130e2d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 09:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd20b6ddf52584e26d5b576847830cb5440711394c81642e12bb190aa6d5683/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd20b6ddf52584e26d5b576847830cb5440711394c81642e12bb190aa6d5683/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd20b6ddf52584e26d5b576847830cb5440711394c81642e12bb190aa6d5683/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd20b6ddf52584e26d5b576847830cb5440711394c81642e12bb190aa6d5683/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:01 compute-0 podman[251287]: 2025-11-25 09:48:01.120773238 +0000 UTC m=+0.072540317 container init bf08aebaf45a5f98995f2aa0990acb39be3ad8282b92f73ae3cb21c6130e2d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:48:01 compute-0 podman[251287]: 2025-11-25 09:48:01.12490074 +0000 UTC m=+0.076667810 container start bf08aebaf45a5f98995f2aa0990acb39be3ad8282b92f73ae3cb21c6130e2d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:48:01 compute-0 bash[251287]: bf08aebaf45a5f98995f2aa0990acb39be3ad8282b92f73ae3cb21c6130e2d6c
Nov 25 09:48:01 compute-0 podman[251287]: 2025-11-25 09:48:01.065460886 +0000 UTC m=+0.017227965 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:48:01 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:48:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:01 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:48:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:01 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:48:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:01 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:48:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:01 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:48:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:01 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:48:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:01 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:48:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:01 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:48:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:01 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:48:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:01.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:01 compute-0 sudo[251466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exkhpzkoolzgfsygsemiwyduxhhkvepc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064081.267459-4045-249138648089308/AnsiballZ_container_config_data.py'
Nov 25 09:48:01 compute-0 sudo[251466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:01 compute-0 python3.9[251468]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 09:48:01 compute-0 sudo[251466]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:01.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:01 compute-0 ceph-mon[74207]: pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:02 compute-0 sudo[251620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxxffegbhjpomrkcqwmviakjwhsbtiwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064082.5362248-4072-64669336876388/AnsiballZ_container_config_hash.py'
Nov 25 09:48:02 compute-0 sudo[251620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:02 compute-0 python3.9[251622]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 09:48:02 compute-0 sudo[251620]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:03 compute-0 sudo[251772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kabghrzwnixqwgbcyfitekygrsxsksmo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764064083.2320032-4102-87079266265082/AnsiballZ_edpm_container_manage.py'
Nov 25 09:48:03 compute-0 sudo[251772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:03 compute-0 python3[251774]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 09:48:03 compute-0 podman[251803]: 2025-11-25 09:48:03.758288826 +0000 UTC m=+0.028201035 container create e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute)
Nov 25 09:48:03 compute-0 podman[251803]: 2025-11-25 09:48:03.744749127 +0000 UTC m=+0.014661346 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 09:48:03 compute-0 python3[251774]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 25 09:48:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:03.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:03 compute-0 sudo[251772]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:04 compute-0 sudo[251981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reuiaewkgvflsjqokvsmbljdactxsubb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064084.1147237-4126-108012087844842/AnsiballZ_stat.py'
Nov 25 09:48:04 compute-0 sudo[251981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:04 compute-0 ceph-mon[74207]: pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:04 compute-0 python3.9[251983]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:48:04 compute-0 sudo[251981]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:04 compute-0 sudo[252135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryadyuoycrlufgunftnpemsrbahpxvlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064084.7802916-4153-14199339596617/AnsiballZ_file.py'
Nov 25 09:48:04 compute-0 sudo[252135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:05 compute-0 python3.9[252137]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:05 compute-0 sudo[252135]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:48:05.376 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:48:05.377 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:48:05.377 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:48:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:05 compute-0 sudo[252286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fchbdedptkepgaybrkrmtepicpavgyus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064085.168067-4153-213277439907994/AnsiballZ_copy.py'
Nov 25 09:48:05 compute-0 sudo[252286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:05 compute-0 python3.9[252288]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764064085.168067-4153-213277439907994/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:05 compute-0 sudo[252286]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:05 compute-0 sudo[252363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njldxgrajxfhjkogiarcwcbzuzqacbjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064085.168067-4153-213277439907994/AnsiballZ_systemd.py'
Nov 25 09:48:05 compute-0 sudo[252363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:05.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:05 compute-0 python3.9[252365]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:48:06 compute-0 systemd[1]: Reloading.
Nov 25 09:48:06 compute-0 systemd-sysv-generator[252390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:48:06 compute-0 systemd-rc-local-generator[252386]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:48:06 compute-0 sudo[252363]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:06 compute-0 ceph-mon[74207]: pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:06 compute-0 sudo[252475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfxunntilunnnykxtpaflgopecgvpiig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064085.168067-4153-213277439907994/AnsiballZ_systemd.py'
Nov 25 09:48:06 compute-0 sudo[252475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:06 compute-0 python3.9[252477]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 09:48:06 compute-0 systemd[1]: Reloading.
Nov 25 09:48:06 compute-0 systemd-sysv-generator[252503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 09:48:06 compute-0 systemd-rc-local-generator[252500]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:48:06 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 09:48:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:07.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:07.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:07.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:07.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:07 compute-0 podman[252517]: 2025-11-25 09:48:07.066432413 +0000 UTC m=+0.065309092 container init e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:48:07 compute-0 podman[252517]: 2025-11-25 09:48:07.073951231 +0000 UTC m=+0.072827890 container start e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:48:07 compute-0 podman[252517]: nova_compute
Nov 25 09:48:07 compute-0 nova_compute[252529]: + sudo -E kolla_set_configs
Nov 25 09:48:07 compute-0 systemd[1]: Started nova_compute container.
Nov 25 09:48:07 compute-0 sudo[252475]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Validating config file
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying service configuration files
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Deleting /etc/ceph
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Creating directory /etc/ceph
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Writing out command to execute
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:07 compute-0 nova_compute[252529]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 09:48:07 compute-0 nova_compute[252529]: ++ cat /run_command
Nov 25 09:48:07 compute-0 nova_compute[252529]: + CMD=nova-compute
Nov 25 09:48:07 compute-0 nova_compute[252529]: + ARGS=
Nov 25 09:48:07 compute-0 nova_compute[252529]: + sudo kolla_copy_cacerts
Nov 25 09:48:07 compute-0 nova_compute[252529]: + [[ ! -n '' ]]
Nov 25 09:48:07 compute-0 nova_compute[252529]: + . kolla_extend_start
Nov 25 09:48:07 compute-0 nova_compute[252529]: Running command: 'nova-compute'
Nov 25 09:48:07 compute-0 nova_compute[252529]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 09:48:07 compute-0 nova_compute[252529]: + umask 0022
Nov 25 09:48:07 compute-0 nova_compute[252529]: + exec nova-compute
Nov 25 09:48:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:07 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:48:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:07 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:48:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:48:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:48:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:48:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:07.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:07 compute-0 podman[252568]: 2025-11-25 09:48:07.980608067 +0000 UTC m=+0.043262794 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:48:08 compute-0 ceph-mon[74207]: pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:48:08 compute-0 nova_compute[252529]: 2025-11-25 09:48:08.984 252533 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:08 compute-0 nova_compute[252529]: 2025-11-25 09:48:08.985 252533 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:08 compute-0 nova_compute[252529]: 2025-11-25 09:48:08.985 252533 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:08 compute-0 nova_compute[252529]: 2025-11-25 09:48:08.985 252533 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 25 09:48:09 compute-0 python3.9[252711]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
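The ansible.builtin.stat invocations in this run (follow=False, checksum_algorithm=sha1) gather roughly the following; a plain-Python sketch of a subset of the fields the module reports, with the path taken from the log:

    import hashlib
    import os
    import stat

    path = "/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service"
    try:
        st = os.lstat(path)  # follow=False -> lstat, don't resolve symlinks
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        result = {
            "exists": True,
            "mode": stat.filemode(st.st_mode),
            "size": st.st_size,
            "checksum": digest,  # checksum_algorithm=sha1
        }
    except FileNotFoundError:
        result = {"exists": False}
    print(result)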
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.100 252533 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.112 252533 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.112 252533 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
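The three processutils lines above are nova (via the os-brick iSCSI connector) probing /sbin/iscsiadm for manual-scan support: grep exits 1 when the node.session.scan string is absent, and processutils records that as a failed run and does not retry. A minimal sketch of the same probe, assuming oslo.concurrency; whitelisting exit code 1 turns the miss into an ordinary answer instead of an error:

    from oslo_concurrency import processutils

    try:
        out, _err = processutils.execute(
            "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
            check_exit_code=[0, 1],  # grep: 0 = found, 1 = not found
        )
        manual_scan_supported = bool(out)
    except processutils.ProcessExecutionError:
        # Raised only for exit codes outside the whitelist (e.g. grep: 2,
        # meaning the file could not be read at all).
        manual_scan_supported = False
    print(manual_scan_supported)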
Nov 25 09:48:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:48:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
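The req/beast triplets above record anonymous "HEAD /" probes against the local radosgw, the usual signature of a load-balancer health check; each completes with 200 and a zero-length body. A minimal sketch of the same probe, assuming the requests library; the URL is an assumption (7480 is the stock beast default, and the address/port this deployment actually exposes is not shown in the log):

    import requests

    # Hypothetical endpoint; substitute the deployment's RGW address.
    resp = requests.head("http://192.168.122.100:7480/", timeout=5)
    print(resp.status_code)  # the log above records 200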
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.626 252533 INFO nova.virt.driver [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.729 252533 INFO nova.compute.provider_config [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 09:48:09 compute-0 python3.9[252863]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.751 252533 DEBUG oslo_concurrency.lockutils [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.751 252533 DEBUG oslo_concurrency.lockutils [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.752 252533 DEBUG oslo_concurrency.lockutils [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
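The Acquiring/Acquired/Releasing trio above is oslo.service serializing its startup singletons behind a named in-process lock. A minimal sketch of the same primitive, assuming oslo.concurrency; the lock name matches the log and the critical section is illustrative:

    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        # One holder at a time within this process; passing external=True
        # plus a lock_path extends the exclusion across processes via a
        # file lock (nova's oslo_concurrency.lock_path is /var/lib/nova/tmp,
        # per the config dump below).
        pass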
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.752 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.752 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.752 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.752 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.753 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.753 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.753 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.753 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.753 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.753 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.753 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.754 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.754 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.754 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.754 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.754 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.754 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.754 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.755 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.755 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.755 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.755 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.755 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.755 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.756 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.756 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.756 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.756 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.756 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.756 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.756 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.757 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.757 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.757 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.757 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.757 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.757 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.757 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.758 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.758 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.758 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.758 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.758 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.758 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.758 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.759 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.759 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.759 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.759 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.759 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.759 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.760 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.760 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.760 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.760 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.760 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.760 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.760 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.760 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.761 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.761 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.761 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.761 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.761 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.761 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.761 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.762 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.762 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.762 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.762 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.762 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.762 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.762 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.763 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.763 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.763 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.764 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.764 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.764 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.764 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.764 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.764 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.765 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.765 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.765 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.765 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.765 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.765 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.765 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.766 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.766 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.766 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.766 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.766 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.766 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.766 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.767 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.767 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.767 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.767 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.767 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.767 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.767 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.768 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.768 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.768 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.768 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.768 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.768 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.768 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.768 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.769 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.769 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.769 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.769 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.769 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.769 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.769 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.770 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.770 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.770 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.770 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.770 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.770 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.770 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.771 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.771 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.771 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.771 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.771 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.771 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.771 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.772 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.772 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.772 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.772 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.772 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.772 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.772 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.772 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.773 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.773 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.773 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.773 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.773 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.773 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.773 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.774 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.774 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.774 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.774 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.774 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.775 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.775 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.775 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.775 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.775 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.775 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.776 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.776 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.776 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.776 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.776 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.776 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.776 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.777 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.777 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.777 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.777 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.777 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.777 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.777 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.778 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.778 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.778 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.778 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.778 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.778 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.778 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.779 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.779 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.779 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.779 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.779 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.779 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.779 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.780 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.780 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.780 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.780 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.780 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.780 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.780 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.780 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.781 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.781 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.781 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.781 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.781 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.781 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.781 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.782 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.782 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.782 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.782 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.782 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.782 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.782 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.783 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.783 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.783 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.783 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.783 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.783 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.783 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.784 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.784 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.784 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.784 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.784 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.784 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.784 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.784 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.785 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.785 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.785 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.785 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.785 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.785 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.785 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.786 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.786 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.786 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.786 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.786 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.786 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.786 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.786 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.787 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.787 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.787 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.787 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.787 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.787 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.787 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.788 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.788 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.788 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.788 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.788 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.788 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.788 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.789 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.789 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.789 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.789 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.789 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.789 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.789 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.789 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.790 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.790 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.790 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.790 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.790 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.790 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.791 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.791 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.791 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.791 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.791 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.791 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.791 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.792 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.792 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.792 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.792 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.792 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.792 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.792 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.793 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.793 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.793 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.793 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.793 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.793 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.793 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.794 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.794 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.794 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.794 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.794 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.794 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.794 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.794 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.795 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.795 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.795 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.795 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.795 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.795 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.795 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.796 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.796 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.796 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.796 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.796 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.796 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.796 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.797 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.797 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.797 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.797 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.797 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.797 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.797 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.797 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.798 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.798 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.798 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.798 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.798 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.798 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.798 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.799 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.799 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.799 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.799 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.799 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.799 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.799 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.799 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.800 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.800 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.800 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.800 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.800 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.800 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.800 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.801 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.801 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.801 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.801 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.801 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.801 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.801 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.802 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.802 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.802 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.802 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.802 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.802 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.802 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.802 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.803 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.803 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.803 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.803 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.803 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.804 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.804 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.804 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.804 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.804 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.804 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.805 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.805 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.805 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.805 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.805 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.806 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.806 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.806 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.806 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.807 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.807 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.807 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.807 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.807 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.807 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.807 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.808 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.808 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.808 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.808 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.808 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.808 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.808 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.808 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.809 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.809 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.809 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.809 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.809 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.809 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.810 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.810 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.810 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.810 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.810 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.810 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.810 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.810 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.811 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.811 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.811 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.811 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
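[editor's note] Taken together, the [key_manager] and [barbican] values above describe Castellan's barbican backend resolved through the identity endpoint at http://localhost/identity/v3, using the internal endpoint type, with up to 60 retries at 1-second intervals; everything else sits at its default. A hedged reconstruction of the nova.conf fragment this dump implies (inferred from the logged values, not copied from the host):

    import configparser
    import textwrap

    INFERRED = textwrap.dedent("""\
        [key_manager]
        backend = barbican

        [barbican]
        auth_endpoint = http://localhost/identity/v3
        barbican_endpoint_type = internal
        number_of_retries = 60
        retry_delay = 1
        verify_ssl = True
        """)

    cp = configparser.ConfigParser()
    cp.read_string(INFERRED)
    assert cp.get('key_manager', 'backend') == 'barbican'
    for key, value in cp.items('barbican'):
        print(f'barbican.{key} = {value}')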
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.811 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.811 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.811 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.812 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.812 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.812 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.812 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.812 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.812 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.812 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.813 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.813 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.813 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.813 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.813 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.813 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.813 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.813 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.814 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.814 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.814 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.814 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.814 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.814 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.814 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
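[editor's note] The [vault] group is fully registered but idle on this node: key_manager.backend = barbican above means none of these values are consulted, and vault_url still points at the library default. If one did switch Castellan to its vault backend, the defaults shown (kv_mountpoint = secret, kv_version = 2, use_ssl = False) suggest roughly the stanza below; the AppRole credentials are hypothetical placeholders, since both are None in the dump:

    import configparser
    import textwrap

    VAULT_SKETCH = textwrap.dedent("""\
        [key_manager]
        backend = vault

        [vault]
        vault_url = http://127.0.0.1:8200
        kv_mountpoint = secret
        kv_version = 2
        # Hypothetical credentials; both options are None in the dump above.
        approle_role_id = REPLACE_ME
        approle_secret_id = REPLACE_ME
        """)

    cp = configparser.ConfigParser()
    cp.read_string(VAULT_SKETCH)
    print(dict(cp['vault']))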
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.815 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.815 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.815 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.815 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.815 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.815 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.815 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.815 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.816 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.816 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.816 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.816 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.816 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.816 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.816 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.817 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.817 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.817 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.817 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
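[editor's note] The [keystone] group above is a standard keystoneauth1 adapter block: service_type = identity, valid_interfaces = ['internal', 'public'], every retry and TLS knob left at None. Nova builds a client for each service from a group shaped like this one. A hedged sketch of turning such a group into an adapter with keystoneauth1's loading helpers (session=None keeps it runnable offline; with a real authenticated session the adapter would resolve the identity endpoint from the catalog, trying 'internal' before 'public'):

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    # Registers the standard adapter knobs seen above (service-type,
    # valid-interfaces, region-name, endpoint-override, ...) on a group.
    ks_loading.register_adapter_conf_options(conf, 'keystone')
    conf([])

    adapter = ks_loading.load_adapter_from_conf_options(
        conf, 'keystone', session=None,
        service_type='identity', interface=['internal', 'public'])
    print(adapter.service_type, adapter.interface)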
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.817 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.817 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.817 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.818 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.818 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.818 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.818 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.818 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.818 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.818 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.818 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.819 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.819 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.819 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.819 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.819 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.819 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.819 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.820 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.820 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.820 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.820 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.820 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.820 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.820 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.821 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.821 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.821 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.821 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.821 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.821 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.821 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.821 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.822 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.822 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.822 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.822 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.822 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.822 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.822 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.823 252533 WARNING oslo_config.cfg [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 09:48:09 compute-0 nova_compute[252529]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 09:48:09 compute-0 nova_compute[252529]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Nov 25 09:48:09 compute-0 nova_compute[252529]: and ``live_migration_inbound_addr`` respectively.
Nov 25 09:48:09 compute-0 nova_compute[252529]: ).  Its value may be silently ignored in the future.
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.823 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.823 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
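[editor's note] The warning above is nova's own deprecation text: this node still sets live_migration_uri = qemu+tls://%s/system (consistent with live_migration_with_native_tls = True), but the supported spelling is live_migration_scheme plus, where needed, live_migration_inbound_addr. A hedged sketch of the equivalent modern stanza, with the option names taken straight from the warning:

    import configparser
    import textwrap

    # Deprecated form, as currently deployed (see the dump above):
    #   [libvirt] live_migration_uri = qemu+tls://%s/system
    # Replacement form suggested by the deprecation text:
    MODERN = textwrap.dedent("""\
        [libvirt]
        live_migration_scheme = tls
        live_migration_with_native_tls = True
        # live_migration_inbound_addr = <per-host migration address>, only
        # needed when the default hostname-based target is wrong (unset here).
        """)

    cp = configparser.ConfigParser()
    cp.read_string(MODERN)
    print(dict(cp['libvirt']))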
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.823 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.823 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.823 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.824 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.824 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.824 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.824 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.824 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.824 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.824 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.825 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.825 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.825 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.825 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.825 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.825 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.825 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rbd_secret_uuid        = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.826 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.826 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.826 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.826 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.826 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.826 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.826 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.826 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.827 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.827 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.827 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.827 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.827 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.827 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.827 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.828 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.828 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.828 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.828 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.828 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.828 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.828 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.829 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.829 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.829 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.829 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.829 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.829 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.829 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.830 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.830 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.830 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.830 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
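[editor's note] The storage-related [libvirt] values above describe a Ceph-backed compute node: ephemeral disks go straight to RBD (images_type = rbd, pool vms) as client openstack via the libvirt secret af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 and /etc/ceph/ceph.conf, with cinder volumes attached over multipath (volume_use_multipath = True). A hedged consistency check one could run against such a stanza; the values are copied from the dump, the check logic itself is illustrative:

    import configparser
    import textwrap

    RBD_STANZA = textwrap.dedent("""\
        [libvirt]
        images_type = rbd
        images_rbd_pool = vms
        images_rbd_ceph_conf = /etc/ceph/ceph.conf
        rbd_user = openstack
        rbd_secret_uuid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
        volume_use_multipath = True
        """)

    cp = configparser.ConfigParser()
    cp.read_string(RBD_STANZA)
    lv = cp['libvirt']
    # When images_type is rbd, the pool, client user and libvirt secret must
    # all be present, or nova-compute cannot open its RBD images.
    if lv.get('images_type') == 'rbd':
        for required in ('images_rbd_pool', 'rbd_user', 'rbd_secret_uuid'):
            assert lv.get(required), f'{required} must be set for rbd backend'
    print('rbd stanza is self-consistent')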
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.830 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.830 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.830 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.831 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.831 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.831 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.831 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.831 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.831 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.831 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.831 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.832 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.832 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.832 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.832 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.832 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.832 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.832 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.833 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.833 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.833 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.833 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.833 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.833 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.833 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.833 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.834 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.834 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
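[editor's note] neutron.service_metadata_proxy = True with a masked metadata_proxy_shared_secret means metadata requests arrive proxied by neutron's metadata agent, which signs each request: the agent computes HMAC-SHA256 over the instance UUID with the shared secret and sends it as X-Instance-ID-Signature, and the nova metadata service recomputes and compares it before trusting X-Instance-ID. A minimal sketch of that check, with a hypothetical secret and UUID (the real secret is the **** value above):

    import hashlib
    import hmac

    SHARED_SECRET = b'not-the-real-secret'  # masked as **** in the dump
    instance_id = '1b6731f0-aaaa-bbbb-cccc-000000000000'  # hypothetical UUID

    # What neutron-metadata-agent puts in X-Instance-ID-Signature:
    signature = hmac.new(SHARED_SECRET, instance_id.encode(),
                         hashlib.sha256).hexdigest()

    # What the nova metadata service recomputes; a mismatch means the
    # request is rejected, so both sides must share the same secret.
    expected = hmac.new(SHARED_SECRET, instance_id.encode(),
                        hashlib.sha256).hexdigest()
    assert hmac.compare_digest(signature, expected)
    print('signature ok:', signature[:16], '...')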
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.834 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.834 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.834 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.834 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.834 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.835 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.835 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.835 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.835 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.835 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.835 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.835 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.836 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.836 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.836 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.836 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.836 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.836 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.836 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.836 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.837 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.837 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.837 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.837 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.837 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.837 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.837 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.838 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.838 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.838 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.838 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.838 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.838 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.838 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.838 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.839 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.839 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.839 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.839 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.839 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.839 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.839 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.840 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.840 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.840 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.840 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.840 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.840 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.840 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.841 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.841 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.841 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.841 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.841 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.841 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.841 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.841 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.842 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.842 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.842 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.842 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.842 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.842 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.843 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.843 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.843 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.843 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.843 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:09.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.843 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.844 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.844 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.844 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.844 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.845 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.845 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.845 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.845 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.845 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.845 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.846 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.846 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.846 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.846 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.846 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.846 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.846 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.847 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.847 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.847 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.847 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.847 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.847 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.847 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.848 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.848 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.848 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.848 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.848 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.849 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.849 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.849 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.849 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.849 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.849 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.849 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.850 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.850 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.850 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.850 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.850 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.850 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.851 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.851 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.851 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.851 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.851 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.851 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.851 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.852 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.852 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.852 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.852 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.852 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.852 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.852 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.853 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.853 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.853 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.853 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.853 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.853 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.853 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.854 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.854 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.854 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.854 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.854 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.854 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.854 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.855 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.855 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.855 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.855 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.855 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.855 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.855 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.855 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.856 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.856 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.856 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.856 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.856 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.856 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.856 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.857 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.857 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.857 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.857 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.857 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.857 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.857 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.858 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.858 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.858 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.858 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.858 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.858 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.858 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.859 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.859 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.859 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.859 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.859 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.859 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.860 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.860 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.860 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.860 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.860 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.860 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.860 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.861 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.861 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.861 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.861 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.861 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.861 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.861 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.862 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.862 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.862 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.862 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.862 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.862 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.862 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.863 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.863 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.863 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.863 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.863 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.863 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.863 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.863 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.864 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.864 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.864 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.864 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.864 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.864 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.864 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.865 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.865 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.865 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.865 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.865 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.865 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.866 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.866 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.866 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.866 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.866 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.866 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.866 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.867 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.867 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.867 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.867 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.867 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.867 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.867 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.867 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.868 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.868 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.868 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.868 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.868 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.868 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.868 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.869 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.869 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.869 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.869 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.869 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.869 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.869 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.870 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.870 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.870 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.870 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.870 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.870 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.870 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.871 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.871 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.871 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.871 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.871 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.871 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.871 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.872 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.872 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.872 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.872 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.872 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.872 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.872 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.873 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.873 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.873 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.873 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.873 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.873 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.873 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.873 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.874 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.874 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.874 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.874 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.874 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.874 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.874 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.875 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.875 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.875 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.875 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.875 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.875 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.875 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.875 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.876 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.876 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.876 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.876 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.876 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.876 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.876 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.877 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.877 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.877 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.877 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.877 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.877 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.877 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.877 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.878 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.878 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.878 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.878 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.878 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.878 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.878 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.879 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.879 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.879 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.879 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.879 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.879 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.879 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.880 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.880 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.880 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.880 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.880 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.880 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.880 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.881 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.881 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.881 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.881 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.881 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.881 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.881 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.882 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.882 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.882 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.882 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.882 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.882 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.882 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.882 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.883 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.883 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.883 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.883 252533 DEBUG oslo_service.service [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
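[editor's note] The block of DEBUG lines ending at the row of asterisks above is oslo.config's startup option dump: every line cites `log_opt_values` in `oslo_config/cfg.py`, which walks each registered option group and logs one `group.option = value` line, masking options registered as secret (hence the `****` for `oslo_limit.password` and `oslo_messaging_notifications.transport_url`). A minimal sketch of the same mechanism, assuming only the oslo.config library; the option names below are illustrative stand-ins, not nova's real registration code:

```python
# Minimal sketch of the dump above, assuming oslo.config is installed.
# Option names are illustrative; nova registers its own options.
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [cfg.StrOpt('username', default='nova'),
     cfg.StrOpt('password', secret=True)],  # secret=True is logged as ****
    group='oslo_limit')
CONF(args=[])  # process (empty) command line / config files

# The call referenced in every DEBUG line above: emits one
# "group.option = value" line per registered option at the given level.
CONF.log_opt_values(LOG, logging.DEBUG)
```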
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.884 252533 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.895 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.895 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.895 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.896 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 09:48:09 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 09:48:09 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.941 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4217443910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.944 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4217443910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.944 252533 INFO nova.virt.libvirt.driver [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Connection event '1' reason 'None'
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.958 252533 WARNING nova.virt.libvirt.driver [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 09:48:09 compute-0 nova_compute[252529]: 2025-11-25 09:48:09.958 252533 DEBUG nova.virt.libvirt.volume.mount [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
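[editor's note] At this point nova-compute has opened its libvirt connection (`qemu:///system`) and registered lifecycle and connection event callbacks; the `<capabilities>` document it logs a few lines below comes from that connection. A minimal sketch of the same two calls, assuming the libvirt-python binding is installed:

```python
# Minimal sketch, assuming the libvirt-python binding is installed:
# open the URI nova-compute logged above and fetch the same
# <capabilities> XML that is dumped to the journal below.
import libvirt

conn = libvirt.open('qemu:///system')   # "Connecting to libvirt: qemu:///system"
try:
    caps_xml = conn.getCapabilities()   # source of the <capabilities> dump
    print(caps_xml)
finally:
    conn.close()
```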
Nov 25 09:48:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:10] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:48:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:10] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:48:10 compute-0 ceph-mon[74207]: pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:48:10 compute-0 python3.9[253067]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.630 252533 INFO nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]: 
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <host>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <uuid>0f2c6148-bac3-4049-9f53-233f21cb16c0</uuid>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <arch>x86_64</arch>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <microcode version='167776725'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <signature family='25' model='1' stepping='1'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <maxphysaddr mode='emulate' bits='48'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='x2apic'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='tsc-deadline'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='osxsave'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='hypervisor'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='tsc_adjust'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='ospke'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='vaes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='vpclmulqdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='spec-ctrl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='stibp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='arch-capabilities'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='cmp_legacy'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='virt-ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='lbrv'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='tsc-scale'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='vmcb-clean'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='pause-filter'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='pfthreshold'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='vgif'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='rdctl-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='skip-l1dfl-vmentry'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='mds-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature name='pschange-mc-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <pages unit='KiB' size='4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <pages unit='KiB' size='2048'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <pages unit='KiB' size='1048576'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <power_management>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <suspend_mem/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </power_management>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <iommu support='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <migration_features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <live/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <uri_transports>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <uri_transport>tcp</uri_transport>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <uri_transport>rdma</uri_transport>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </uri_transports>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </migration_features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <topology>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <cells num='1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <cell id='0'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:           <memory unit='KiB'>7865360</memory>
Nov 25 09:48:10 compute-0 nova_compute[252529]:           <pages unit='KiB' size='4'>1966340</pages>
Nov 25 09:48:10 compute-0 nova_compute[252529]:           <pages unit='KiB' size='2048'>0</pages>
Nov 25 09:48:10 compute-0 nova_compute[252529]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 25 09:48:10 compute-0 nova_compute[252529]:           <distances>
Nov 25 09:48:10 compute-0 nova_compute[252529]:             <sibling id='0' value='10'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:           </distances>
Nov 25 09:48:10 compute-0 nova_compute[252529]:           <cpus num='4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:           </cpus>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         </cell>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </cells>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </topology>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <cache>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </cache>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <secmodel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model>selinux</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <doi>0</doi>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </secmodel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <secmodel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model>dac</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <doi>0</doi>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </secmodel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </host>
Nov 25 09:48:10 compute-0 nova_compute[252529]: 
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <guest>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <os_type>hvm</os_type>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <arch name='i686'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <wordsize>32</wordsize>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <domain type='qemu'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <domain type='kvm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </arch>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <pae/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <nonpae/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <acpi default='on' toggle='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <apic default='on' toggle='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <cpuselection/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <deviceboot/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <disksnapshot default='on' toggle='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <externalSnapshot/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </guest>
Nov 25 09:48:10 compute-0 nova_compute[252529]: 
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <guest>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <os_type>hvm</os_type>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <arch name='x86_64'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <wordsize>64</wordsize>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <domain type='qemu'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <domain type='kvm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </arch>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <acpi default='on' toggle='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <apic default='on' toggle='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <cpuselection/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <deviceboot/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <disksnapshot default='on' toggle='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <externalSnapshot/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </guest>
Nov 25 09:48:10 compute-0 nova_compute[252529]: 
Nov 25 09:48:10 compute-0 nova_compute[252529]: </capabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]: 
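[editor's note] The `<capabilities>` dump above is what nova's libvirt driver inspects for the host CPU model, NUMA topology, and supported page sizes. A self-contained sketch of extracting those fields with the standard library; the embedded XML literal is a shortened stand-in for the document above:

```python
# Self-contained sketch: parse the fields nova cares about out of a
# <capabilities> document. The XML literal is a shortened stand-in
# for the dump logged above.
import xml.etree.ElementTree as ET

caps_xml = """
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <model>EPYC-Milan-v1</model>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>7865360</memory>
        </cell>
      </cells>
    </topology>
  </host>
</capabilities>
"""

root = ET.fromstring(caps_xml)
cpu = root.find('./host/cpu')
print('arch :', cpu.findtext('arch'))      # x86_64
print('model:', cpu.findtext('model'))     # EPYC-Milan-v1
print('pages:', [f"{p.get('size')} {p.get('unit')}"
                 for p in cpu.findall('pages')])
for cell in root.findall('./host/topology/cells/cell'):
    print('NUMA cell', cell.get('id'),
          'memory KiB =', cell.findtext('memory'))
```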
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.634 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
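[editor's note] The per-arch, per-machine-type probing logged here corresponds to libvirt's `getDomainCapabilities()` call; a sketch of the query that produces the `<domainCapabilities>` dump that follows, assuming the libvirt-python binding (argument order as in `virConnectGetDomainCapabilities`):

```python
# Sketch of the probe behind the dump that follows, assuming the
# libvirt-python binding; arguments mirror virConnectGetDomainCapabilities().
import libvirt

conn = libvirt.open('qemu:///system')
try:
    dom_caps = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator path (see <path> below)
        'i686',                   # arch being probed here
        'q35',                    # machine type
        'kvm',                    # virt type
        0)                        # flags
    print(dom_caps)               # the <domainCapabilities> document below
finally:
    conn.close()
```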
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.648 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 09:48:10 compute-0 nova_compute[252529]: <domainCapabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <domain>kvm</domain>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <arch>i686</arch>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <vcpu max='4096'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <iothreads supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <os supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <enum name='firmware'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <loader supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>rom</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pflash</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='readonly'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>yes</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>no</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='secure'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>no</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </loader>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </os>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>on</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>off</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='maximumMigratable'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>on</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>off</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='succor'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='custom' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Denverton'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Denverton-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-128'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-256'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-512'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='KnightsMill'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SierraForest'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='athlon'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='athlon-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='core2duo'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='core2duo-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='coreduo'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='coreduo-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='n270'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='n270-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='phenom'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='phenom-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <memoryBacking supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <enum name='sourceType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>file</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>anonymous</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>memfd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </memoryBacking>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <devices>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <disk supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='diskDevice'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>disk</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>cdrom</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>floppy</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>lun</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='bus'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>fdc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>scsi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>sata</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </disk>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <graphics supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vnc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>egl-headless</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dbus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </graphics>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <video supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='modelType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vga</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>cirrus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>none</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>bochs</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ramfb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </video>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <hostdev supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='mode'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>subsystem</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='startupPolicy'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>default</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>mandatory</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>requisite</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>optional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='subsysType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pci</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>scsi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='capsType'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='pciBackend'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </hostdev>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <rng supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>random</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>egd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>builtin</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </rng>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <filesystem supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='driverType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>path</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>handle</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtiofs</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </filesystem>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <tpm supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tpm-tis</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tpm-crb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>emulator</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>external</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendVersion'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>2.0</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </tpm>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <redirdev supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='bus'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </redirdev>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <channel supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pty</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>unix</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </channel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <crypto supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>qemu</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>builtin</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </crypto>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <interface supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>default</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>passt</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </interface>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <panic supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>isa</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>hyperv</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </panic>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <console supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>null</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pty</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dev</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>file</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pipe</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>stdio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>udp</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tcp</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>unix</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>qemu-vdagent</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dbus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </console>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </devices>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <gic supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <genid supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <backup supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <async-teardown supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <ps2 supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <sev supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <sgx supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <hyperv supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='features'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>relaxed</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vapic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>spinlocks</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vpindex</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>runtime</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>synic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>stimer</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>reset</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vendor_id</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>frequencies</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>reenlightenment</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tlbflush</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ipi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>avic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>emsr_bitmap</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>xmm_input</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <defaults>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </defaults>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </hyperv>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <launchSecurity supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='sectype'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tdx</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </launchSecurity>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </features>
Nov 25 09:48:10 compute-0 nova_compute[252529]: </domainCapabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.651 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 09:48:10 compute-0 nova_compute[252529]: <domainCapabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <domain>kvm</domain>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <arch>i686</arch>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <vcpu max='240'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <iothreads supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <os supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <enum name='firmware'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <loader supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>rom</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pflash</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='readonly'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>yes</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>no</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='secure'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>no</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </loader>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </os>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>on</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>off</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='maximumMigratable'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>on</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>off</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='succor'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='custom' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Denverton'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Denverton-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-128'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-256'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-512'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='KnightsMill'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SierraForest'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='athlon'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='athlon-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='core2duo'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='core2duo-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='coreduo'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='coreduo-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='n270'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='n270-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='phenom'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='phenom-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <memoryBacking supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <enum name='sourceType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>file</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>anonymous</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>memfd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </memoryBacking>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <devices>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <disk supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='diskDevice'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>disk</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>cdrom</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>floppy</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>lun</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='bus'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ide</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>fdc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>scsi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>sata</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </disk>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <graphics supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vnc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>egl-headless</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dbus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </graphics>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <video supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='modelType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vga</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>cirrus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>none</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>bochs</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ramfb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </video>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <hostdev supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='mode'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>subsystem</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='startupPolicy'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>default</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>mandatory</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>requisite</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>optional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='subsysType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pci</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>scsi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='capsType'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='pciBackend'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </hostdev>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <rng supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>random</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>egd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>builtin</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </rng>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <filesystem supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='driverType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>path</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>handle</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtiofs</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </filesystem>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <tpm supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tpm-tis</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tpm-crb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>emulator</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>external</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendVersion'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>2.0</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </tpm>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <redirdev supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='bus'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </redirdev>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <channel supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pty</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>unix</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </channel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <crypto supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>qemu</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>builtin</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </crypto>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <interface supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>default</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>passt</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </interface>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <panic supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>isa</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>hyperv</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </panic>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <console supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>null</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pty</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dev</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>file</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pipe</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>stdio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>udp</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tcp</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>unix</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>qemu-vdagent</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dbus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </console>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </devices>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <gic supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <genid supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <backup supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <async-teardown supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <ps2 supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <sev supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <sgx supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <hyperv supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='features'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>relaxed</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vapic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>spinlocks</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vpindex</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>runtime</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>synic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>stimer</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>reset</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vendor_id</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>frequencies</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>reenlightenment</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tlbflush</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ipi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>avic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>emsr_bitmap</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>xmm_input</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <defaults>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </defaults>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </hyperv>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <launchSecurity supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='sectype'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tdx</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </launchSecurity>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </features>
Nov 25 09:48:10 compute-0 nova_compute[252529]: </domainCapabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.654 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.656 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 09:48:10 compute-0 nova_compute[252529]: <domainCapabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <domain>kvm</domain>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <arch>x86_64</arch>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <vcpu max='4096'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <iothreads supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <os supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <enum name='firmware'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>efi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <loader supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>rom</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pflash</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='readonly'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>yes</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>no</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='secure'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>yes</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>no</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </loader>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </os>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>on</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>off</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='maximumMigratable'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>on</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>off</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='succor'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='custom' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Denverton'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Denverton-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-128'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-256'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-512'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
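[editor's note] The Haswell group above shows the pattern clearly: the plain and -IBRS variants are blocked solely by hle/rtm (the host exposes no TSX), while the -noTSX variants are usable. A sketch of fetching the same capabilities document live instead of scraping the log, assuming the libvirt-python binding is installed and a local qemu:///system connection is available (both are assumptions about the environment):

import libvirt  # libvirt-python binding
import xml.etree.ElementTree as ET

conn = libvirt.open("qemu:///system")
# Same document nova_compute logged: emulator/machine left as defaults.
caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
conn.close()

root = ET.fromstring(caps_xml)
for m in root.iter("model"):
    if m.get("usable") == "yes":
        print(m.text)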
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
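[editor's note] Because many model entries share the same blockers, ranking blocker features by frequency shows which host gaps rule out the most models (on this node, TSX and the AVX-512 family dominate). A small sketch over the same extracted domcaps.xml file assumed above:

import xml.etree.ElementTree as ET
from collections import Counter

root = ET.parse("domcaps.xml").getroot()

# Count how often each missing host feature appears as a blocker.
counts = Counter(f.get("name")
                 for blk in root.iter("blockers")
                 for f in blk.findall("feature"))

for feature, n in counts.most_common(10):
    print("%-20s blocks %d model entries" % (feature, n))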
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='KnightsMill'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
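[editor's note] This usability list is what matters when pinning guest CPU models in nova.conf, e.g. cpu_mode = custom with a cpu_models list under [libvirt]: a model reported usable='no' here cannot be satisfied by this host. A rough offline pre-check, not Nova's actual validation code path; the requested list below is purely an example, not this deployment's configuration:

import xml.etree.ElementTree as ET

requested = ["Haswell-noTSX-IBRS", "EPYC-Rome", "Icelake-Server"]  # example values only

usability = {m.text: m.get("usable")
             for m in ET.parse("domcaps.xml").getroot().iter("model")
             if m.get("usable") is not None}

for model in requested:
    state = usability.get(model, "unknown")
    verdict = "ok" if state == "yes" else "NOT USABLE (%s)" % state
    print("%-24s %s" % (model, verdict))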
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SierraForest'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='athlon'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='athlon-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='core2duo'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='core2duo-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='coreduo'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='coreduo-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='n270'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='n270-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='phenom'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='phenom-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <memoryBacking supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <enum name='sourceType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>file</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>anonymous</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>memfd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </memoryBacking>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <devices>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <disk supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='diskDevice'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>disk</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>cdrom</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>floppy</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>lun</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='bus'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>fdc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>scsi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>sata</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </disk>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <graphics supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vnc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>egl-headless</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dbus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </graphics>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <video supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='modelType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vga</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>cirrus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>none</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>bochs</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ramfb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </video>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <hostdev supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='mode'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>subsystem</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='startupPolicy'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>default</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>mandatory</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>requisite</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>optional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='subsysType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pci</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>scsi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='capsType'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='pciBackend'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </hostdev>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <rng supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>random</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>egd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>builtin</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </rng>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <filesystem supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='driverType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>path</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>handle</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtiofs</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </filesystem>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <tpm supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tpm-tis</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tpm-crb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>emulator</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>external</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendVersion'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>2.0</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </tpm>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <redirdev supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='bus'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </redirdev>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <channel supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pty</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>unix</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </channel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <crypto supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>qemu</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>builtin</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </crypto>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <interface supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>default</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>passt</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </interface>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <panic supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>isa</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>hyperv</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </panic>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <console supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>null</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pty</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dev</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>file</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pipe</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>stdio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>udp</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tcp</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>unix</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>qemu-vdagent</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dbus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </console>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </devices>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <gic supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <genid supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <backup supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <async-teardown supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <ps2 supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <sev supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <sgx supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <hyperv supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='features'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>relaxed</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vapic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>spinlocks</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vpindex</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>runtime</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>synic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>stimer</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>reset</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vendor_id</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>frequencies</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>reenlightenment</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tlbflush</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ipi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>avic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>emsr_bitmap</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>xmm_input</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <defaults>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </defaults>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </hyperv>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <launchSecurity supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='sectype'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tdx</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </launchSecurity>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </features>
Nov 25 09:48:10 compute-0 nova_compute[252529]: </domainCapabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
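[Editor's note, not part of the log: the record above is the XML that libvirt's getDomainCapabilities API returns and that nova's _get_domain_capabilities logs verbatim. A minimal sketch of reproducing and reducing such a dump to usable vs. blocked CPU models follows; the qemu:///system URI and the 'pc' machine type are assumptions chosen to match this host, and the equivalent CLI query is `virsh domcapabilities --virttype kvm --arch x86_64 --machine pc`.]

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    # Same query nova wraps: emulator binary, arch, machine type, virt type, flags.
    xml = conn.getDomainCapabilities('/usr/libexec/qemu-kvm', 'x86_64', 'pc', 'kvm', 0)
    caps = ET.fromstring(xml)

    # Walk the custom-mode CPU models, mirroring the <model usable=...> entries above.
    for model in caps.findall("./cpu/mode[@name='custom']/model"):
        name = model.text
        if model.get('usable') == 'yes':
            print(f"usable:  {name}")
        else:
            # For unusable models libvirt lists the host-missing features in a
            # <blockers> element, e.g. avx512*, hle, rtm in the record above.
            blockers = caps.find(f"./cpu/mode[@name='custom']/blockers[@model='{name}']")
            missing = [f.get('name') for f in blockers.findall('feature')] if blockers is not None else []
            print(f"blocked: {name} (missing: {', '.join(missing)})")
    conn.close()

[End of editor's note; the log resumes with the next capabilities record, for machine_type=pc.]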
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.699 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 09:48:10 compute-0 nova_compute[252529]: <domainCapabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <domain>kvm</domain>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <arch>x86_64</arch>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <vcpu max='240'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <iothreads supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <os supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <enum name='firmware'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <loader supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>rom</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pflash</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='readonly'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>yes</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>no</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='secure'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>no</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </loader>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </os>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>on</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>off</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='maximumMigratable'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>on</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>off</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <vendor>AMD</vendor>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='succor'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <mode name='custom' supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Denverton'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Denverton-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='auto-ibrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amd-psfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='stibp-always-on'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-128'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-256'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx10-512'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='prefetchiti'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Haswell-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='KnightsMill'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512er'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512pf'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fma4'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tbm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xop'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='amx-tile'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-bf16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-fp16'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bitalg'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrc'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fzrm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='la57'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='taa-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='xfd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SierraForest'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ifma'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cmpccxadd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fbsdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='fsrs'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ibrs-all'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mcdt-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='pbrsb-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='psdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='serialize'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='hle'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='rtm'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512bw'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512cd'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512dq'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512f'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='avx512vl'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='mpx'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='core-capability'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='split-lock-detect'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='cldemote'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='gfni'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdir64b'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='movdiri'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='athlon'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='athlon-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='core2duo'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='core2duo-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='coreduo'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='coreduo-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='n270'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='n270-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='ss'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='phenom'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <blockers model='phenom-v1'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnow'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <feature name='3dnowext'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </blockers>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </mode>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </cpu>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <memoryBacking supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <enum name='sourceType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>file</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>anonymous</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <value>memfd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </memoryBacking>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <devices>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <disk supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='diskDevice'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>disk</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>cdrom</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>floppy</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>lun</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='bus'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ide</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>fdc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>scsi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>sata</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </disk>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <graphics supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vnc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>egl-headless</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dbus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </graphics>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <video supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='modelType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vga</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>cirrus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>none</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>bochs</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ramfb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </video>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <hostdev supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='mode'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>subsystem</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='startupPolicy'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>default</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>mandatory</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>requisite</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>optional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='subsysType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pci</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>scsi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='capsType'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='pciBackend'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </hostdev>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <rng supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtio-non-transitional</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>random</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>egd</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>builtin</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </rng>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <filesystem supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='driverType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>path</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>handle</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>virtiofs</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </filesystem>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <tpm supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tpm-tis</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tpm-crb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>emulator</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>external</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendVersion'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>2.0</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </tpm>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <redirdev supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='bus'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>usb</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </redirdev>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <channel supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pty</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>unix</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </channel>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <crypto supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>qemu</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendModel'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>builtin</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </crypto>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <interface supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='backendType'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>default</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>passt</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </interface>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <panic supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='model'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>isa</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>hyperv</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </panic>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <console supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='type'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>null</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vc</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pty</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dev</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>file</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>pipe</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>stdio</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>udp</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tcp</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>unix</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>qemu-vdagent</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>dbus</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </console>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </devices>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   <features>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <gic supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <genid supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <backup supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <async-teardown supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <ps2 supported='yes'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <sev supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <sgx supported='no'/>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <hyperv supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='features'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>relaxed</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vapic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>spinlocks</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vpindex</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>runtime</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>synic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>stimer</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>reset</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>vendor_id</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>frequencies</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>reenlightenment</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tlbflush</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>ipi</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>avic</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>emsr_bitmap</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>xmm_input</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <defaults>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </defaults>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </hyperv>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     <launchSecurity supported='yes'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       <enum name='sectype'>
Nov 25 09:48:10 compute-0 nova_compute[252529]:         <value>tdx</value>
Nov 25 09:48:10 compute-0 nova_compute[252529]:       </enum>
Nov 25 09:48:10 compute-0 nova_compute[252529]:     </launchSecurity>
Nov 25 09:48:10 compute-0 nova_compute[252529]:   </features>
Nov 25 09:48:10 compute-0 nova_compute[252529]: </domainCapabilities>
Nov 25 09:48:10 compute-0 nova_compute[252529]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
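The domainCapabilities document nova just logged is what libvirt returns from virConnectGetDomainCapabilities (virsh domcapabilities prints the same XML). A minimal sketch, assuming libvirt-python and a local qemu:///system connection, of fetching it and listing the CPU models the host cannot offer together with the features blocking them:

    import libvirt                      # libvirt-python binding
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    # Defaults for emulator binary, arch, machine type and virt type, flags=0;
    # this is the same document nova logs via _get_domain_capabilities.
    caps_xml = conn.getDomainCapabilities(None, None, None, None, 0)
    root = ET.fromstring(caps_xml)

    # Each <blockers model='X'> element names the features missing for model X.
    for blockers in root.iter('blockers'):
        model = blockers.get('model')
        missing = [f.get('name') for f in blockers.findall('feature')]
        print(f"{model}: blocked by {', '.join(missing)}")
    conn.close()

Run against the host above, this would emit one line per usable='no' model, e.g. Snowridge-v1: blocked by cldemote, core-capability, gfni, movdir64b, movdiri, mpx, split-lock-detect.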
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.741 252533 DEBUG nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.741 252533 INFO nova.virt.libvirt.host [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Secure Boot support detected
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.742 252533 INFO nova.virt.libvirt.driver [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.749 252533 DEBUG nova.virt.libvirt.driver [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.783 252533 INFO nova.virt.node [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Determined node identity d9873737-caae-40cc-9346-77a33537057c from /var/lib/nova/compute_id
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.844 252533 WARNING nova.compute.manager [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Compute nodes ['d9873737-caae-40cc-9346-77a33537057c'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.864 252533 INFO nova.compute.manager [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.900 252533 WARNING nova.compute.manager [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.900 252533 DEBUG oslo_concurrency.lockutils [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.901 252533 DEBUG oslo_concurrency.lockutils [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.901 252533 DEBUG oslo_concurrency.lockutils [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
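The Acquiring/acquired/released triplet above is oslo.concurrency's synchronized decorator logging from lockutils.py; nova reaches it through its own wrapper around the compute_resources semaphore. A rough stand-alone sketch of the pattern (the function body is illustrative, not nova's code):

    from oslo_concurrency import lockutils

    # Every caller decorated with the same lock name is serialized; the
    # decorator's inner wrapper emits the "Acquiring lock" / "Lock acquired" /
    # "released" debug lines seen above, with wait and hold times.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # mutate shared resource-tracker state only while holding the lock

    clean_compute_node_cache()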
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.901 252533 DEBUG nova.compute.resource_tracker [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:48:10 compute-0 nova_compute[252529]: 2025-11-25 09:48:10.901 252533 DEBUG oslo_concurrency.processutils [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:11 compute-0 sudo[253249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnmpkhqxclvcydapgtsaykvbifcalqqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064090.7892523-4333-200934433507772/AnsiballZ_podman_container.py'
Nov 25 09:48:11 compute-0 sudo[253249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:48:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3343906913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:11 compute-0 nova_compute[252529]: 2025-11-25 09:48:11.254 252533 DEBUG oslo_concurrency.processutils [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
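The resource audit shells out to ceph df to size the RBD pool backing ephemeral disks. A self-contained sketch using the same CLI flags as the logged command; the stats field names below match recent Ceph releases but are an assumption about this cluster's version:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)

    # Top-level 'stats' carries cluster-wide totals in bytes; 'pools' holds
    # the per-pool breakdown nova uses for the images_rbd_pool.
    total = stats['stats']['total_bytes']
    avail = stats['stats']['total_avail_bytes']
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")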
Nov 25 09:48:11 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 09:48:11 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 25 09:48:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:11 compute-0 python3.9[253251]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
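The module invocation above (state=absent with force_delete=True) simply ensures the nova_nvme_cleaner container is gone. A hedged sketch of roughly the same end state with plain podman verbs, not a claim about the module's exact internals:

    import subprocess

    name = 'nova_nvme_cleaner'
    # `podman container exists` exits 0 when the named container is present.
    exists = subprocess.run(
        ['podman', 'container', 'exists', name]).returncode == 0
    if exists:
        # --force stops a running container before removing it.
        subprocess.run(['podman', 'rm', '--force', name], check=True)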
Nov 25 09:48:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:48:11 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:48:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:48:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3877624485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3343906913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:11 compute-0 sudo[253249]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:11 compute-0 nova_compute[252529]: 2025-11-25 09:48:11.622 252533 WARNING nova.virt.libvirt.driver [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:48:11 compute-0 nova_compute[252529]: 2025-11-25 09:48:11.623 252533 DEBUG nova.compute.resource_tracker [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4973MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:48:11 compute-0 nova_compute[252529]: 2025-11-25 09:48:11.623 252533 DEBUG oslo_concurrency.lockutils [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:11 compute-0 nova_compute[252529]: 2025-11-25 09:48:11.624 252533 DEBUG oslo_concurrency.lockutils [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:11 compute-0 nova_compute[252529]: 2025-11-25 09:48:11.641 252533 WARNING nova.compute.resource_tracker [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] No compute node record for compute-0.ctlplane.example.com:d9873737-caae-40cc-9346-77a33537057c: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host d9873737-caae-40cc-9346-77a33537057c could not be found.
Nov 25 09:48:11 compute-0 nova_compute[252529]: 2025-11-25 09:48:11.656 252533 INFO nova.compute.resource_tracker [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: d9873737-caae-40cc-9346-77a33537057c
Nov 25 09:48:11 compute-0 nova_compute[252529]: 2025-11-25 09:48:11.694 252533 DEBUG nova.compute.resource_tracker [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:48:11 compute-0 nova_compute[252529]: 2025-11-25 09:48:11.694 252533 DEBUG nova.compute.resource_tracker [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:48:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:11.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:11 compute-0 sudo[253444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcoecwtybggavcjzhvkqzqfkynuhuhei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064091.6418955-4357-132722995432660/AnsiballZ_systemd.py'
Nov 25 09:48:11 compute-0 sudo[253444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:12 compute-0 python3.9[253446]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:48:12 compute-0 systemd[1]: Stopping nova_compute container...
Nov 25 09:48:12 compute-0 nova_compute[252529]: 2025-11-25 09:48:12.167 252533 DEBUG oslo_concurrency.lockutils [None req-3b4c7280-75b3-41a4-a0e5-827a5af60050 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:48:12 compute-0 nova_compute[252529]: 2025-11-25 09:48:12.167 252533 DEBUG oslo_concurrency.lockutils [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:48:12 compute-0 nova_compute[252529]: 2025-11-25 09:48:12.167 252533 DEBUG oslo_concurrency.lockutils [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:48:12 compute-0 nova_compute[252529]: 2025-11-25 09:48:12.167 252533 DEBUG oslo_concurrency.lockutils [None req-42d48638-f213-4ba5-b501-e65e6ded6d04 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:48:12 compute-0 virtqemud[252911]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 09:48:12 compute-0 virtqemud[252911]: hostname: compute-0
Nov 25 09:48:12 compute-0 virtqemud[252911]: End of file while reading data: Input/output error
Nov 25 09:48:12 compute-0 systemd[1]: libpod-e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc.scope: Deactivated successfully.
Nov 25 09:48:12 compute-0 systemd[1]: libpod-e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc.scope: Consumed 2.905s CPU time.
Nov 25 09:48:12 compute-0 podman[253451]: 2025-11-25 09:48:12.411112613 +0000 UTC m=+0.269400120 container died e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 25 09:48:12 compute-0 ceph-mon[74207]: pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc-userdata-shm.mount: Deactivated successfully.
Nov 25 09:48:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436-merged.mount: Deactivated successfully.
Nov 25 09:48:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:48:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:13 compute-0 podman[253451]: 2025-11-25 09:48:13.453538934 +0000 UTC m=+1.311826441 container cleanup e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Nov 25 09:48:13 compute-0 podman[253451]: nova_compute
Nov 25 09:48:13 compute-0 podman[253490]: nova_compute
Nov 25 09:48:13 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 09:48:13 compute-0 systemd[1]: Stopped nova_compute container.
Nov 25 09:48:13 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 09:48:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063d26b4076e85b4dc82e69c50903e6f6fd7eb2b26aa202d59e5d1f2e0822436/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:13 compute-0 podman[253500]: 2025-11-25 09:48:13.606090563 +0000 UTC m=+0.070104644 container init e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:48:13 compute-0 podman[253500]: 2025-11-25 09:48:13.609959949 +0000 UTC m=+0.073974010 container start e87ef96d501600f5848bb5f6740b0329ecc8416337e73330596688b61a95aafc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 09:48:13 compute-0 podman[253500]: nova_compute
Nov 25 09:48:13 compute-0 nova_compute[253512]: + sudo -E kolla_set_configs
Nov 25 09:48:13 compute-0 systemd[1]: Started nova_compute container.
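[editor's note] The config_data blob in the two podman events above fully describes how nova_compute is run. A minimal sketch of the equivalent podman invocation, reconstructed by hand from that blob (this is not the literal edpm_ansible command line, and the volume list and image digest are abridged):

    import subprocess

    cmd = [
        "podman", "run", "--detach", "--name", "nova_compute",
        "--privileged", "--user", "nova", "--restart", "always",
        "--net", "host", "--pid", "host",
        "--env", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS",
        "--volume", "/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro",
        "--volume", "/var/lib/nova:/var/lib/nova:shared",
        # ... remaining volumes from config_data elided ...
        "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1...",
        "kolla_start",
    ]
    subprocess.run(cmd, check=True)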
Nov 25 09:48:13 compute-0 sudo[253444]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Validating config file
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying service configuration files
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /etc/ceph
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Creating directory /etc/ceph
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Writing out command to execute
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:13 compute-0 nova_compute[253512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 09:48:13 compute-0 nova_compute[253512]: ++ cat /run_command
Nov 25 09:48:13 compute-0 nova_compute[253512]: + CMD=nova-compute
Nov 25 09:48:13 compute-0 nova_compute[253512]: + ARGS=
Nov 25 09:48:13 compute-0 nova_compute[253512]: + sudo kolla_copy_cacerts
Nov 25 09:48:13 compute-0 nova_compute[253512]: + [[ ! -n '' ]]
Nov 25 09:48:13 compute-0 nova_compute[253512]: + . kolla_extend_start
Nov 25 09:48:13 compute-0 nova_compute[253512]: Running command: 'nova-compute'
Nov 25 09:48:13 compute-0 nova_compute[253512]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 09:48:13 compute-0 nova_compute[253512]: + umask 0022
Nov 25 09:48:13 compute-0 nova_compute[253512]: + exec nova-compute
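[editor's note] The INFO:__main__ block above is kolla_set_configs executing the COPY_ALWAYS strategy: for each entry in config.json it deletes the destination, copies the source in, and resets permissions, then writes the service command to /run_command for kolla_start to exec. A rough sketch of that loop, assuming a config.json of the shape kolla uses ({"command": ..., "config_files": [{"source": ..., "dest": ..., "perm": ...}, ...]}); the real script also handles directories, globs, and ownership:

    import json, os, shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for item in cfg.get("config_files", []):
        dest = item["dest"]
        if os.path.lexists(dest):
            os.remove(dest)                     # "Deleting <dest>"
        shutil.copy(item["source"], dest)       # "Copying <source> to <dest>"
        os.chmod(dest, int(item.get("perm", "0600"), 8))  # "Setting permission"

    with open("/run_command", "w") as f:        # "Writing out command to execute"
        f.write(cfg["command"])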
Nov 25 09:48:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:13.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
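[editor's note] These anonymous "HEAD / HTTP/1.0" 200 entries recur every couple of seconds from the controller-plane addresses and look like load-balancer health probes against radosgw. A standalone equivalent check (the port is an assumption, not taken from the log):

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # expect 200 from a healthy radosgw
    conn.close()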
Nov 25 09:48:14 compute-0 sudo[253675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avijwzjfgqczpecqfdcmlellpevhhfha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764064093.8159778-4384-66242427884456/AnsiballZ_podman_container.py'
Nov 25 09:48:14 compute-0 sudo[253675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:14 compute-0 python3.9[253677]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
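[editor's note] Nearly every parameter in the podman_container invocation above is None, meaning "unset"; only name, state, executable, detach and a few others carry values. A toy illustration of reducing such a call to its effective options before building a CLI (the dict below is abridged from the log, and this reduction step is an illustrative assumption, not containers.podman source):

    params = {"name": "nova_compute_init", "state": "started",
              "executable": "podman", "detach": True,
              "image": None, "privileged": None, "network": None}

    effective = {k: v for k, v in params.items() if v is not None}
    print(effective)
    # {'name': 'nova_compute_init', 'state': 'started',
    #  'executable': 'podman', 'detach': True}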
Nov 25 09:48:14 compute-0 systemd[1]: Started libpod-conmon-a4538bf8fe4f6c62f44a200a54e54126b3f9307b40a0e9865c19c88412037212.scope.
Nov 25 09:48:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:48:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883572f6e2592b76bcfe55229b5ccb59fde9443a4bcb4b69c2d3582caf0476c6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883572f6e2592b76bcfe55229b5ccb59fde9443a4bcb4b69c2d3582caf0476c6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883572f6e2592b76bcfe55229b5ccb59fde9443a4bcb4b69c2d3582caf0476c6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:14 compute-0 podman[253697]: 2025-11-25 09:48:14.375065728 +0000 UTC m=+0.085400285 container init a4538bf8fe4f6c62f44a200a54e54126b3f9307b40a0e9865c19c88412037212 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 25 09:48:14 compute-0 podman[253697]: 2025-11-25 09:48:14.381444085 +0000 UTC m=+0.091778642 container start a4538bf8fe4f6c62f44a200a54e54126b3f9307b40a0e9865c19c88412037212 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 09:48:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:14 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900001550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:14 compute-0 python3.9[253677]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 09:48:14 compute-0 nova_compute_init[253717]: INFO:nova_statedir:Nova statedir ownership complete
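[editor's note] The nova_compute_init messages above describe a walk over /var/lib/nova that normalizes ownership to the in-container nova uid/gid (42436) while skipping the path listed in NOVA_STATEDIR_OWNERSHIP_SKIP. A condensed sketch of that walk (the real /sbin/nova_statedir_ownership.py also sets the SELinux context, as logged, and handles errors):

    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = {"/var/lib/nova/compute_id"}   # from NOVA_STATEDIR_OWNERSHIP_SKIP

    for dirpath, dirnames, filenames in os.walk("/var/lib/nova"):
        paths = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
        for path in paths:
            if path in SKIP:
                continue
            st = os.lstat(path)           # "Checking uid: ... gid: ... path: ..."
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                # "Changing ownership of <path> from X:Y to 42436:42436"
                os.lchown(path, TARGET_UID, TARGET_GID)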
Nov 25 09:48:14 compute-0 systemd[1]: libpod-a4538bf8fe4f6c62f44a200a54e54126b3f9307b40a0e9865c19c88412037212.scope: Deactivated successfully.
Nov 25 09:48:14 compute-0 ceph-mon[74207]: pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.443610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094443679, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4197, "num_deletes": 502, "total_data_size": 8561615, "memory_usage": 8705384, "flush_reason": "Manual Compaction"}
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094461665, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8307853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13140, "largest_seqno": 17335, "table_properties": {"data_size": 8290133, "index_size": 11974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36516, "raw_average_key_size": 19, "raw_value_size": 8253673, "raw_average_value_size": 4449, "num_data_blocks": 524, "num_entries": 1855, "num_filter_entries": 1855, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063651, "oldest_key_time": 1764063651, "file_creation_time": 1764064094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 18244 microseconds, and 10576 cpu microseconds.
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.461862) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8307853 bytes OK
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.461971) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.462725) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.462737) EVENT_LOG_v1 {"time_micros": 1764064094462734, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.462750) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8544850, prev total WAL file size 8544850, number of live WAL files 2.
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.464283) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8113KB)], [32(11MB)]
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094464307, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 20016525, "oldest_snapshot_seqno": -1}
Nov 25 09:48:14 compute-0 podman[253728]: 2025-11-25 09:48:14.489016815 +0000 UTC m=+0.053693231 container died a4538bf8fe4f6c62f44a200a54e54126b3f9307b40a0e9865c19c88412037212 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:48:14 compute-0 sudo[253675]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5044 keys, 15136086 bytes, temperature: kUnknown
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094501107, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15136086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15097637, "index_size": 24707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 125908, "raw_average_key_size": 24, "raw_value_size": 15001578, "raw_average_value_size": 2974, "num_data_blocks": 1042, "num_entries": 5044, "num_filter_entries": 5044, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764064094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.501384) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15136086 bytes
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.501795) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 541.8 rd, 409.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.2 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(4.2) write-amplify(1.8) OK, records in: 6066, records dropped: 1022 output_compression: NoCompression
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.501810) EVENT_LOG_v1 {"time_micros": 1764064094501802, "job": 14, "event": "compaction_finished", "compaction_time_micros": 36945, "compaction_time_cpu_micros": 20926, "output_level": 6, "num_output_files": 1, "total_output_size": 15136086, "num_input_records": 6066, "num_output_records": 5044, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094503134, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094504499, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.464252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:48:14 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
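[editor's note] The JOB 14 summary above reports read-write-amplify(4.2) and write-amplify(1.8), and both follow from the sizes it prints, in(7.9, 11.2) MB read from L0 and L6 and out(14.4) MB written, relative to the 7.9 MB of L0 input:

    in_l0, in_l6, out = 7.9, 11.2, 14.4
    print(round((in_l0 + in_l6 + out) / in_l0, 1))  # 4.2  total I/O per byte of L0 input
    print(round(out / in_l0, 1))                    # 1.8  bytes written per byte of L0 input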
Nov 25 09:48:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a4538bf8fe4f6c62f44a200a54e54126b3f9307b40a0e9865c19c88412037212-userdata-shm.mount: Deactivated successfully.
Nov 25 09:48:14 compute-0 podman[253728]: 2025-11-25 09:48:14.515849038 +0000 UTC m=+0.080525443 container cleanup a4538bf8fe4f6c62f44a200a54e54126b3f9307b40a0e9865c19c88412037212 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Nov 25 09:48:14 compute-0 systemd[1]: libpod-conmon-a4538bf8fe4f6c62f44a200a54e54126b3f9307b40a0e9865c19c88412037212.scope: Deactivated successfully.
Nov 25 09:48:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-883572f6e2592b76bcfe55229b5ccb59fde9443a4bcb4b69c2d3582caf0476c6-merged.mount: Deactivated successfully.
Nov 25 09:48:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:14 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900001550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:48:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:48:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:48:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:48:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:48:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:48:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:48:15 compute-0 sshd-session[223490]: Connection closed by 192.168.122.30 port 55894
Nov 25 09:48:15 compute-0 sshd-session[223487]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:48:15 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Nov 25 09:48:15 compute-0 systemd[1]: session-54.scope: Consumed 1min 39.468s CPU time.
Nov 25 09:48:15 compute-0 systemd-logind[744]: Session 54 logged out. Waiting for processes to exit.
Nov 25 09:48:15 compute-0 systemd-logind[744]: Removed session 54.
Nov 25 09:48:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.343 253516 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.343 253516 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.343 253516 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.344 253516 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
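[editor's note] os_vif discovers those three plugin classes through stevedore entry points, and the loading logged above happens once per process when the library is initialized. The equivalent call, which produces the same "Loaded VIF plugins" line:

    import os_vif

    os_vif.initialize()   # loads linux_bridge, noop, ovs from the os_vif namespace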
Nov 25 09:48:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:15.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094815 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:48:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:15 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.453 253516 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.462 253516 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.463 253516 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
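[editor's note] This grep is a feature probe: nova (via os-brick) checks whether the iscsiadm binary supports node.session.scan by searching the file for the string. Return code 1 means "not found", so manual scan is treated as unsupported; here that is likely because kolla replaced /usr/sbin/iscsiadm with the run-on-host wrapper earlier in this boot (see the copy at 09:48:13). A standalone equivalent of the probe:

    import subprocess

    rc = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode
    supports_manual_scan = (rc == 0)
    print(supports_manual_scan)   # False on this host, matching rc 1 above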
Nov 25 09:48:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:15.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.865 253516 INFO nova.virt.driver [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.953 253516 INFO nova.compute.provider_config [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.962 253516 DEBUG oslo_concurrency.lockutils [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.963 253516 DEBUG oslo_concurrency.lockutils [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.963 253516 DEBUG oslo_concurrency.lockutils [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
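[editor's note] The Acquiring/Acquired/Releasing triple above is oslo.concurrency's named-lock helper guarding service startup; the same three DEBUG lines come from the usage below (lock name taken from the log):

    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        pass   # critical section; enter/exit are what gets logged

The "Full set of CONF" dump that follows is oslo.config's log_opt_values, which prints every resolved option, command-line args, config files, and config dirs included, at service start when debug logging is on.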
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.963 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.963 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.964 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.964 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.964 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.964 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.964 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.964 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.964 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.964 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.965 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.965 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.965 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.965 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.965 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.965 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.966 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.966 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.966 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.966 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.966 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.966 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.966 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.967 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.967 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.967 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.967 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.967 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.967 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.968 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.968 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.968 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.968 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.968 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.968 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.968 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.969 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.969 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.969 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.969 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.969 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.969 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.969 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.970 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.970 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.970 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.970 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.970 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.970 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.970 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.971 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.971 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.971 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.971 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.971 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.971 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.971 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.972 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.972 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.972 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.972 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.972 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.972 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.972 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.972 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.973 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.973 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.973 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.973 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.973 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.973 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.973 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.974 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.974 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.974 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.974 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.974 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.974 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.974 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.975 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.975 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.975 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.975 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.975 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.975 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.975 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.976 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.976 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.976 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.976 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.976 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.976 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.976 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.977 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.977 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.977 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.977 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.977 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.977 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.977 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.978 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.978 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.978 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.978 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.978 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.978 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.978 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.979 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.979 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.979 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.979 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.979 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.979 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.979 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.979 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.980 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.980 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.980 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.980 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.980 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.980 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.981 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.981 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.981 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.981 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.981 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.981 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.981 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.982 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.982 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.982 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.982 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.982 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.982 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.982 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.983 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.983 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.983 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.983 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.983 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.983 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.983 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.984 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.984 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.984 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.984 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.984 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.984 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.984 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.985 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.985 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.985 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.985 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.985 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.985 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.985 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.986 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.986 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.986 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.986 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.986 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.986 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.986 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.987 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.987 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.987 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.987 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.987 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.987 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.987 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.988 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.988 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.988 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.988 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.988 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.988 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.988 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.989 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.989 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.989 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.989 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.989 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.989 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.989 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.990 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.990 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.990 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.990 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.990 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.990 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.991 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.991 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.991 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.991 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.991 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.992 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.992 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.992 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.992 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.993 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.993 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.993 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.993 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.993 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.993 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.994 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.994 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.994 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.994 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.994 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.994 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.995 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.995 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.995 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.995 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.995 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.996 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.996 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.996 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.996 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.996 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.996 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.997 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.997 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.997 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.997 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.997 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.998 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.998 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.998 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.998 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.998 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.998 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.999 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:15 compute-0 nova_compute[253512]: 2025-11-25 09:48:15.999 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.000 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 podman[253779]: 2025-11-25 09:48:16.000381828 +0000 UTC m=+0.062007317 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.000 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.000 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.000 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.001 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.001 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.001 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.001 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.002 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.002 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.002 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.002 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.002 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.003 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.003 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.003 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.003 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.003 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.003 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.004 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.004 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.004 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.004 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.004 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.004 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.005 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.005 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.005 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.005 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.005 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.006 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.006 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.006 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.006 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.006 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.006 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.007 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.007 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.007 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.007 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.007 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.008 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.008 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.008 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.008 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.008 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.008 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.009 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.009 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.009 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.009 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.009 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.010 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.010 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.010 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.010 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.010 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.010 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.011 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.011 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.011 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.011 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.011 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.011 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.012 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.012 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.012 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.012 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.012 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.013 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.013 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.013 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.013 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.013 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.013 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.014 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.014 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.014 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.014 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.014 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.014 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.015 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.015 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.015 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.015 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.015 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.016 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.016 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.016 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.016 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.016 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.016 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.017 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.017 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.017 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.017 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.017 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.018 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.018 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.018 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.018 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.018 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.018 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.019 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.019 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.019 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.019 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.019 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.020 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.020 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.020 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.020 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.020 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.020 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.021 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.021 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.021 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.021 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.021 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.021 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.022 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.022 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.022 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.022 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.023 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.023 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.023 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.023 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.023 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.023 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.024 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.024 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.024 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.024 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.024 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.025 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.025 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.025 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.025 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.025 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.025 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.026 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.026 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.026 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.026 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.026 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.026 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.027 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.027 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.027 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.027 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.027 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.028 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.028 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.028 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.028 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.028 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.028 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.029 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.029 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.029 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.029 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.029 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.029 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.030 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.030 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.030 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.030 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.030 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.031 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.031 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.031 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.031 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.031 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.032 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.032 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.032 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.032 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.032 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.032 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.033 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.033 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.033 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.033 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.033 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.033 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.034 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.034 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.034 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.034 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.034 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.035 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.035 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.035 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.035 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.035 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.035 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.036 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.036 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.036 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.036 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.036 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.036 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.037 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.037 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.037 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.037 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.037 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.038 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.038 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.038 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.038 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.038 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.038 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.039 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.039 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.039 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.039 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.039 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.039 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.040 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.040 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.040 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.040 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.040 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.041 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.041 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.041 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.041 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.041 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.041 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.042 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.042 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.042 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.042 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.042 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.042 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.043 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.043 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.043 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.043 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.043 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.044 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.044 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.044 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.044 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.044 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.044 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.045 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.045 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.045 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.045 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.045 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.045 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.046 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.046 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.046 253516 WARNING oslo_config.cfg [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 09:48:16 compute-0 nova_compute[253512]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 09:48:16 compute-0 nova_compute[253512]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 09:48:16 compute-0 nova_compute[253512]: and ``live_migration_inbound_addr`` respectively.
Nov 25 09:48:16 compute-0 nova_compute[253512]: ).  Its value may be silently ignored in the future.
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.046 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
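The deprecation warning above spells out the replacement: the scheme portion of live_migration_uri moves to live_migration_scheme, and the %s target substitution moves to live_migration_inbound_addr. A minimal nova.conf sketch of the equivalent settings, based on the qemu+tls://%s/system value shown in this log; the inbound address below is a placeholder for illustration, not a value taken from this host:

    [libvirt]
    # Replaces live_migration_uri = qemu+tls://%s/system:
    # "tls" reproduces the qemu+tls:// scheme, and the address below is
    # what migration peers connect to instead of the %s hostname.
    live_migration_scheme = tls
    live_migration_inbound_addr = 192.0.2.10   # placeholder address

The live_migration_with_native_tls = True value on the following line is consistent with this: the migration stream itself uses QEMU's native TLS transport rather than being tunnelled through libvirtd.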
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.047 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.047 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.047 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.047 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.047 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.047 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.048 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.048 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.048 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.048 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.048 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.049 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.049 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.049 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.049 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.049 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.050 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.050 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rbd_secret_uuid        = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.050 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
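Read together, the images_rbd_* values earlier in this dump and the rbd_* values above describe a Ceph-backed ephemeral storage layout: guest disks live in the "vms" RBD pool rather than as files under /var/lib/nova/instances. A minimal sketch of the corresponding [libvirt] section, using only values already present in this log:

    [libvirt]
    images_type = rbd                             # ephemeral disks as RBD volumes
    images_rbd_pool = vms                         # Ceph pool holding the disks
    images_rbd_ceph_conf = /etc/ceph/ceph.conf    # cluster connection settings
    rbd_user = openstack                          # cephx user nova authenticates as
    rbd_secret_uuid = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90  # libvirt secret holding the cephx key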
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.050 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.050 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.050 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.051 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.051 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.051 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.051 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.051 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.051 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.052 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.052 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.052 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.052 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.052 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.053 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.053 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
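The swtpm_enabled = True value above only advertises that this host can emulate a TPM for guests (running the swtpm process as the tss user and group shown); an instance actually gets one only when its flavor or image requests it. A hedged example of requesting an emulated TPM 2.0 through flavor extra specs, assuming a flavor named "tpm.small" exists (the flavor name is a placeholder):

    # hw:tpm_version is required; hw:tpm_model defaults to tpm-tis,
    # and tpm-crb is only valid together with version 2.0.
    openstack flavor set tpm.small \
      --property hw:tpm_version=2.0 \
      --property hw:tpm_model=tpm-crb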
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.053 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.053 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.053 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.054 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.054 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.054 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.054 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.054 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.054 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.055 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.055 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.055 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.055 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.055 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.056 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.056 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.056 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.056 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.056 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.056 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.057 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.057 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.057 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.057 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.057 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.057 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.058 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.058 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.058 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.058 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.058 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.058 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.059 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.059 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.059 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.059 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.059 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.060 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.060 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.060 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.060 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.060 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.060 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.061 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.061 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.061 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.061 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.061 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.061 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.062 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.062 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.062 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.062 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.062 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.063 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.063 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.063 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.063 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.063 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.063 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.064 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.064 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.064 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.064 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.064 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.065 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.065 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.065 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.065 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.065 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.065 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.066 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.066 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.066 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.066 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.066 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.066 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.067 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.067 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.067 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.067 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.067 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.068 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.068 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.068 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.068 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.068 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.069 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.069 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.069 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.069 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.069 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.069 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.070 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.070 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.070 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.070 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.070 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.071 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.071 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.071 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.071 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.071 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.071 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.072 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.072 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.072 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.072 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.073 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.073 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.073 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.073 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.073 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.074 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.074 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.074 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.074 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.074 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.074 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.075 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.075 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.075 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.075 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.075 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.075 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.076 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.076 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.076 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.076 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.076 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.077 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.077 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.077 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.077 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.077 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.077 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.078 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.078 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.078 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.078 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.078 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.078 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.079 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.079 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.079 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.079 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.079 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.080 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.080 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.080 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.080 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.080 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.081 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.081 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.081 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.081 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.081 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.081 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.082 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.082 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.082 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.082 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.082 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.083 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.083 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.083 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.083 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.083 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.083 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.084 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.084 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.084 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.084 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.084 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.085 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.085 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.085 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.085 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.085 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.085 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.086 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.086 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.086 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.086 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.086 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.086 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.087 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.087 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.087 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.087 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.087 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.087 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.088 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.088 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.088 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.088 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.088 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.089 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.089 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.089 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.089 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.089 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.089 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.090 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.090 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.090 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.090 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.090 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.090 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.091 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.091 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.091 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.091 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.091 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.092 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.092 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.092 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.092 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.092 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.093 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.093 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.093 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.093 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.093 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.093 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.094 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.094 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.094 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.094 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.094 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.095 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.095 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.095 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.095 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.095 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.095 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.096 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.096 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.096 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.096 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.096 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.096 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.097 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.097 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.097 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.097 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.097 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.098 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.098 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.098 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.098 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.098 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.098 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.099 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.099 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.099 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.099 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.099 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.100 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.100 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.100 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.100 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.101 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.101 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.101 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.101 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.101 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.101 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.102 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.102 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.102 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.102 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.102 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.103 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.103 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.103 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.103 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.103 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.103 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.104 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.104 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.104 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.104 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.104 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.104 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.105 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.105 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.105 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.105 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.105 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.106 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.106 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.106 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.106 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.106 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.106 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.107 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.107 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.107 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.107 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.107 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.108 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.108 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.108 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.108 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.108 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.109 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.109 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.109 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.109 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.109 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.109 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.110 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.110 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.110 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.110 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.110 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.110 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.111 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.111 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.111 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.111 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.111 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.111 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.112 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.112 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.112 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.112 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.112 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.113 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.113 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.113 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.113 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.113 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.113 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.114 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.114 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.114 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.114 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.114 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.114 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.115 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.115 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.115 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.116 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.116 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.116 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.116 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.116 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.116 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.117 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.117 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.117 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.117 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.117 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.118 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.118 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.118 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.118 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.118 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.118 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.119 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.119 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.119 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.119 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.119 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.119 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.120 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.120 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.120 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.120 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.120 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.121 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.121 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.121 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.121 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.121 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.121 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.122 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.122 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.122 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.122 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.122 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.122 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.123 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.123 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.123 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.123 253516 DEBUG oslo_service.service [None req-a1a35ed0-6d91-4b0d-9180-9265ac2988de - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.124 253516 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.181 253516 INFO nova.virt.node [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Determined node identity d9873737-caae-40cc-9346-77a33537057c from /var/lib/nova/compute_id
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.181 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.182 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.182 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.182 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.190 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f87438b7640> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.192 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f87438b7640> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.192 253516 INFO nova.virt.libvirt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Connection event '1' reason 'None'
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.196 253516 INFO nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]: 
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <host>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <uuid>0f2c6148-bac3-4049-9f53-233f21cb16c0</uuid>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <arch>x86_64</arch>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model>EPYC-Milan-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <vendor>AMD</vendor>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <microcode version='167776725'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <signature family='25' model='1' stepping='1'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <maxphysaddr mode='emulate' bits='48'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='x2apic'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='tsc-deadline'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='osxsave'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='hypervisor'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='tsc_adjust'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='ospke'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='vaes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='vpclmulqdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='spec-ctrl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='stibp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='arch-capabilities'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='cmp_legacy'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='virt-ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='lbrv'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='tsc-scale'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='vmcb-clean'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='pause-filter'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='pfthreshold'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='v-vmsave-vmload'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='vgif'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='rdctl-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='skip-l1dfl-vmentry'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='mds-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature name='pschange-mc-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <pages unit='KiB' size='4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <pages unit='KiB' size='2048'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <pages unit='KiB' size='1048576'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <power_management>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <suspend_mem/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </power_management>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <iommu support='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <migration_features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <live/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <uri_transports>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <uri_transport>tcp</uri_transport>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <uri_transport>rdma</uri_transport>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </uri_transports>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </migration_features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <topology>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <cells num='1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <cell id='0'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:           <memory unit='KiB'>7865360</memory>
Nov 25 09:48:16 compute-0 nova_compute[253512]:           <pages unit='KiB' size='4'>1966340</pages>
Nov 25 09:48:16 compute-0 nova_compute[253512]:           <pages unit='KiB' size='2048'>0</pages>
Nov 25 09:48:16 compute-0 nova_compute[253512]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 25 09:48:16 compute-0 nova_compute[253512]:           <distances>
Nov 25 09:48:16 compute-0 nova_compute[253512]:             <sibling id='0' value='10'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:           </distances>
Nov 25 09:48:16 compute-0 nova_compute[253512]:           <cpus num='4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:           </cpus>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         </cell>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </cells>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </topology>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <cache>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </cache>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <secmodel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model>selinux</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <doi>0</doi>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </secmodel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <secmodel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model>dac</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <doi>0</doi>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </secmodel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </host>
Nov 25 09:48:16 compute-0 nova_compute[253512]: 
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <guest>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <os_type>hvm</os_type>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <arch name='i686'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <wordsize>32</wordsize>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <domain type='qemu'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <domain type='kvm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </arch>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <pae/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <nonpae/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <acpi default='on' toggle='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <apic default='on' toggle='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <cpuselection/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <deviceboot/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <disksnapshot default='on' toggle='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <externalSnapshot/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </guest>
Nov 25 09:48:16 compute-0 nova_compute[253512]: 
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <guest>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <os_type>hvm</os_type>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <arch name='x86_64'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <wordsize>64</wordsize>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <domain type='qemu'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <domain type='kvm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </arch>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <acpi default='on' toggle='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <apic default='on' toggle='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <cpuselection/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <deviceboot/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <disksnapshot default='on' toggle='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <externalSnapshot/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </guest>
Nov 25 09:48:16 compute-0 nova_compute[253512]: 
Nov 25 09:48:16 compute-0 nova_compute[253512]: </capabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]: 
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.202 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.205 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 09:48:16 compute-0 nova_compute[253512]: <domainCapabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <domain>kvm</domain>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <arch>i686</arch>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <vcpu max='240'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <iothreads supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <os supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <enum name='firmware'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <loader supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>rom</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pflash</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='readonly'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>yes</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>no</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='secure'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>no</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </loader>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </os>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>on</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>off</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='maximumMigratable'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>on</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>off</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <vendor>AMD</vendor>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='succor'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='custom' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Denverton'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Denverton-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='auto-ibrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='auto-ibrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-128'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-256'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-512'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='KnightsMill'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512er'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512pf'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512er'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512pf'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tbm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tbm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SierraForest'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cmpccxadd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cmpccxadd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='athlon'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='athlon-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='core2duo'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='core2duo-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='coreduo'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='coreduo-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='n270'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='n270-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='phenom'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='phenom-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <memoryBacking supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <enum name='sourceType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>file</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>anonymous</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>memfd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </memoryBacking>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <disk supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='diskDevice'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>disk</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>cdrom</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>floppy</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>lun</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='bus'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ide</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>fdc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>scsi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>sata</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-non-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <graphics supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vnc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>egl-headless</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dbus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </graphics>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <video supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='modelType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vga</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>cirrus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>none</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>bochs</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ramfb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </video>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <hostdev supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='mode'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>subsystem</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='startupPolicy'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>default</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>mandatory</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>requisite</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>optional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='subsysType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pci</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>scsi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='capsType'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='pciBackend'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </hostdev>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <rng supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-non-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>random</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>egd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>builtin</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <filesystem supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='driverType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>path</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>handle</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtiofs</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </filesystem>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <tpm supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tpm-tis</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tpm-crb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>emulator</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>external</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendVersion'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>2.0</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </tpm>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <redirdev supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='bus'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </redirdev>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <channel supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pty</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>unix</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </channel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <crypto supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>qemu</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>builtin</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </crypto>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <interface supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>default</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>passt</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <panic supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>isa</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>hyperv</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </panic>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <console supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>null</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pty</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dev</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>file</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pipe</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>stdio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>udp</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tcp</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>unix</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>qemu-vdagent</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dbus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </console>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <gic supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <genid supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <backup supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <async-teardown supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <ps2 supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <sev supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <sgx supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <hyperv supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='features'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>relaxed</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vapic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>spinlocks</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vpindex</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>runtime</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>synic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>stimer</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>reset</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vendor_id</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>frequencies</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>reenlightenment</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tlbflush</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ipi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>avic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>emsr_bitmap</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>xmm_input</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <defaults>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </defaults>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </hyperv>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <launchSecurity supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='sectype'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tdx</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </launchSecurity>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </features>
Nov 25 09:48:16 compute-0 nova_compute[253512]: </domainCapabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.210 253516 DEBUG nova.virt.libvirt.volume.mount [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.210 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 09:48:16 compute-0 nova_compute[253512]: <domainCapabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <domain>kvm</domain>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <arch>i686</arch>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <vcpu max='4096'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <iothreads supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <os supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <enum name='firmware'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <loader supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>rom</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pflash</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='readonly'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>yes</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>no</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='secure'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>no</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </loader>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </os>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>on</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>off</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='maximumMigratable'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>on</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>off</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <vendor>AMD</vendor>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='succor'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='custom' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Denverton'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Denverton-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='auto-ibrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='auto-ibrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-128'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-256'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-512'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='KnightsMill'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512er'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512pf'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512er'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512pf'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tbm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tbm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SierraForest'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cmpccxadd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cmpccxadd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='athlon'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='athlon-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='core2duo'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='core2duo-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='coreduo'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='coreduo-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='n270'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='n270-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='phenom'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='phenom-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <memoryBacking supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <enum name='sourceType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>file</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>anonymous</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>memfd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </memoryBacking>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <disk supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='diskDevice'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>disk</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>cdrom</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>floppy</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>lun</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='bus'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>fdc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>scsi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>sata</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-non-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <graphics supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vnc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>egl-headless</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dbus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </graphics>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <video supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='modelType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vga</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>cirrus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>none</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>bochs</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ramfb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </video>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <hostdev supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='mode'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>subsystem</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='startupPolicy'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>default</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>mandatory</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>requisite</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>optional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='subsysType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pci</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>scsi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='capsType'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='pciBackend'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </hostdev>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <rng supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-non-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>random</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>egd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>builtin</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <filesystem supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='driverType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>path</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>handle</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtiofs</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </filesystem>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <tpm supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tpm-tis</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tpm-crb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>emulator</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>external</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendVersion'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>2.0</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </tpm>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <redirdev supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='bus'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </redirdev>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <channel supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pty</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>unix</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </channel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <crypto supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>qemu</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>builtin</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </crypto>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <interface supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>default</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>passt</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <panic supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>isa</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>hyperv</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </panic>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <console supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>null</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pty</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dev</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>file</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pipe</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>stdio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>udp</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tcp</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>unix</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>qemu-vdagent</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dbus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </console>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <gic supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <genid supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <backup supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <async-teardown supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <ps2 supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <sev supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <sgx supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <hyperv supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='features'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>relaxed</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vapic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>spinlocks</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vpindex</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>runtime</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>synic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>stimer</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>reset</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vendor_id</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>frequencies</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>reenlightenment</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tlbflush</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ipi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>avic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>emsr_bitmap</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>xmm_input</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <defaults>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </defaults>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </hyperv>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <launchSecurity supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='sectype'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tdx</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </launchSecurity>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </features>
Nov 25 09:48:16 compute-0 nova_compute[253512]: </domainCapabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.212 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.226 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 09:48:16 compute-0 nova_compute[253512]: <domainCapabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <domain>kvm</domain>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <arch>x86_64</arch>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <vcpu max='4096'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <iothreads supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <os supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <enum name='firmware'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>efi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <loader supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>rom</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pflash</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='readonly'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>yes</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>no</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='secure'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>yes</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>no</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </loader>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </os>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>on</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>off</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='maximumMigratable'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>on</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>off</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <vendor>AMD</vendor>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='succor'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='custom' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Denverton'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Denverton-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='auto-ibrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='auto-ibrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-128'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-256'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-512'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='KnightsMill'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512er'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512pf'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512er'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512pf'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tbm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tbm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SierraForest'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cmpccxadd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cmpccxadd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='athlon'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='athlon-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='core2duo'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='core2duo-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='coreduo'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='coreduo-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='n270'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='n270-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='phenom'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='phenom-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <memoryBacking supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <enum name='sourceType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>file</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>anonymous</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>memfd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </memoryBacking>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <disk supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='diskDevice'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>disk</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>cdrom</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>floppy</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>lun</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='bus'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>fdc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>scsi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>sata</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-non-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <graphics supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vnc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>egl-headless</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dbus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </graphics>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <video supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='modelType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vga</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>cirrus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>none</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>bochs</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ramfb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </video>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <hostdev supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='mode'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>subsystem</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='startupPolicy'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>default</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>mandatory</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>requisite</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>optional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='subsysType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pci</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>scsi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='capsType'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='pciBackend'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </hostdev>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <rng supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-non-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>random</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>egd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>builtin</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <filesystem supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='driverType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>path</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>handle</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtiofs</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </filesystem>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <tpm supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tpm-tis</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tpm-crb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>emulator</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>external</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendVersion'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>2.0</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </tpm>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <redirdev supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='bus'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </redirdev>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <channel supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pty</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>unix</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </channel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <crypto supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>qemu</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>builtin</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </crypto>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <interface supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>default</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>passt</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <panic supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>isa</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>hyperv</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </panic>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <console supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>null</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pty</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dev</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>file</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pipe</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>stdio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>udp</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tcp</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>unix</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>qemu-vdagent</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dbus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </console>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <gic supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <genid supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <backup supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <async-teardown supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <ps2 supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <sev supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <sgx supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <hyperv supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='features'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>relaxed</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vapic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>spinlocks</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vpindex</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>runtime</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>synic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>stimer</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>reset</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vendor_id</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>frequencies</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>reenlightenment</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tlbflush</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ipi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>avic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>emsr_bitmap</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>xmm_input</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <defaults>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </defaults>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </hyperv>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <launchSecurity supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='sectype'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tdx</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </launchSecurity>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </features>
Nov 25 09:48:16 compute-0 nova_compute[253512]: </domainCapabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.253 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 09:48:16 compute-0 nova_compute[253512]: <domainCapabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <domain>kvm</domain>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <arch>x86_64</arch>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <vcpu max='240'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <iothreads supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <os supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <enum name='firmware'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <loader supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>rom</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pflash</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='readonly'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>yes</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>no</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='secure'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>no</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </loader>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </os>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='host-passthrough' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='hostPassthroughMigratable'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>on</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>off</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='maximum' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='maximumMigratable'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>on</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>off</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='host-model' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <vendor>AMD</vendor>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='x2apic'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='hypervisor'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vaes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='stibp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='overflow-recov'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='succor'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='lbrv'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='tsc-scale'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='flushbyasid'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='pause-filter'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='pfthreshold'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='vgif'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <mode name='custom' supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Broadwell-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Cooperlake-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Denverton'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Denverton-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Genoa'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='auto-ibrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='auto-ibrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='EPYC-Milan-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amd-psfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='no-nested-data-bp'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='null-sel-clr-base'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='stibp-always-on'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='GraniteRapids-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-128'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-256'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx10-512'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='prefetchiti'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Haswell-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v6'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Icelake-Server-v7'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='KnightsMill'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512er'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512pf'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='KnightsMill-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4fmaps'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-4vnniw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512er'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512pf'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G4-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tbm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Opteron_G5-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fma4'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tbm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xop'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SapphireRapids-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='amx-tile'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-bf16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-fp16'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512-vpopcntdq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bitalg'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vbmi2'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrc'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fzrm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='la57'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='taa-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='tsx-ldtrk'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='xfd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SierraForest'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cmpccxadd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='SierraForest-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ifma'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-ne-convert'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx-vnni-int8'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='bus-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cmpccxadd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fbsdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='fsrs'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ibrs-all'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mcdt-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='pbrsb-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='psdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='sbdr-ssdp-no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='serialize'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Client-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='hle'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='rtm'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Skylake-Server-v5'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512bw'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512cd'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512dq'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512f'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='avx512vl'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='mpx'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v2'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v3'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='core-capability'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='split-lock-detect'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='Snowridge-v4'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='cldemote'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='gfni'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdir64b'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='movdiri'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='athlon'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='athlon-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='core2duo'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='core2duo-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='coreduo'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='coreduo-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='n270'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='n270-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='ss'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='phenom'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <blockers model='phenom-v1'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnow'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <feature name='3dnowext'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </blockers>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </mode>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <memoryBacking supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <enum name='sourceType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>file</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>anonymous</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <value>memfd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </memoryBacking>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <disk supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='diskDevice'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>disk</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>cdrom</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>floppy</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>lun</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='bus'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ide</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>fdc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>scsi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>sata</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-non-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <graphics supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vnc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>egl-headless</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dbus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </graphics>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <video supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='modelType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vga</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>cirrus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>none</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>bochs</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ramfb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </video>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <hostdev supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='mode'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>subsystem</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='startupPolicy'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>default</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>mandatory</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>requisite</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>optional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='subsysType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pci</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>scsi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='capsType'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='pciBackend'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </hostdev>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <rng supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtio-non-transitional</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>random</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>egd</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>builtin</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <filesystem supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='driverType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>path</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>handle</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>virtiofs</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </filesystem>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <tpm supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tpm-tis</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tpm-crb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>emulator</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>external</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendVersion'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>2.0</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </tpm>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <redirdev supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='bus'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>usb</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </redirdev>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <channel supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pty</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>unix</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </channel>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <crypto supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>qemu</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendModel'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>builtin</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </crypto>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <interface supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='backendType'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>default</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>passt</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <panic supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='model'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>isa</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>hyperv</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </panic>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <console supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='type'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>null</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vc</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pty</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dev</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>file</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>pipe</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>stdio</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>udp</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tcp</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>unix</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>qemu-vdagent</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>dbus</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </console>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   <features>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <gic supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <vmcoreinfo supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <genid supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <backingStoreInput supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <backup supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <async-teardown supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <ps2 supported='yes'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <sev supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <sgx supported='no'/>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <hyperv supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='features'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>relaxed</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vapic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>spinlocks</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vpindex</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>runtime</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>synic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>stimer</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>reset</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>vendor_id</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>frequencies</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>reenlightenment</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tlbflush</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>ipi</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>avic</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>emsr_bitmap</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>xmm_input</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <defaults>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <spinlocks>4095</spinlocks>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <stimer_direct>on</stimer_direct>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </defaults>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </hyperv>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     <launchSecurity supported='yes'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       <enum name='sectype'>
Nov 25 09:48:16 compute-0 nova_compute[253512]:         <value>tdx</value>
Nov 25 09:48:16 compute-0 nova_compute[253512]:       </enum>
Nov 25 09:48:16 compute-0 nova_compute[253512]:     </launchSecurity>
Nov 25 09:48:16 compute-0 nova_compute[253512]:   </features>
Nov 25 09:48:16 compute-0 nova_compute[253512]: </domainCapabilities>
Nov 25 09:48:16 compute-0 nova_compute[253512]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.294 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.294 253516 INFO nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Secure Boot support detected
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.295 253516 INFO nova.virt.libvirt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.302 253516 DEBUG nova.virt.libvirt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.314 253516 INFO nova.virt.node [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Determined node identity d9873737-caae-40cc-9346-77a33537057c from /var/lib/nova/compute_id
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.322 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Verified node d9873737-caae-40cc-9346-77a33537057c matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.334 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 09:48:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:16 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c001ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.395 253516 DEBUG oslo_concurrency.lockutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.395 253516 DEBUG oslo_concurrency.lockutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.395 253516 DEBUG oslo_concurrency.lockutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.395 253516 DEBUG nova.compute.resource_tracker [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.395 253516 DEBUG oslo_concurrency.processutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:16 compute-0 ceph-mon[74207]: pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3733969013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:48:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114636363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.734 253516 DEBUG oslo_concurrency.processutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
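[annotation] The resource audit sizes the Ceph-backed disk pool by shelling out to ceph df --format=json (the 0.339 s CMD above). A sketch of the same probe, assuming the standard "stats" block in ceph's df output:

    import json
    import subprocess

    # The command the resource audit runs above, verbatim.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    total_gib = stats["total_bytes"] / 1024 ** 3
    avail_gib = stats["total_avail_bytes"] / 1024 ** 3
    print(f"pool capacity: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB")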
Nov 25 09:48:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:16 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c001ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.932 253516 WARNING nova.virt.libvirt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.933 253516 DEBUG nova.compute.resource_tracker [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4949MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.933 253516 DEBUG oslo_concurrency.lockutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:48:16 compute-0 nova_compute[253512]: 2025-11-25 09:48:16.933 253516 DEBUG oslo_concurrency.lockutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:48:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:17.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:17.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:17.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:17.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
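[annotation] All three webhook notifications fail the same way: the resolver at 192.168.122.80:53 cannot resolve the *.shiftstack names, so this is a DNS problem upstream of alertmanager, not a webhook problem. A quick reproduction of the failing lookup from Python:

    import socket

    try:
        # The lookup the dispatcher performs before its HTTP POST.
        print(socket.getaddrinfo("np0005534694.shiftstack", 8443))
    except socket.gaierror as exc:
        # Mirrors the "no such host" errors in the alertmanager lines.
        print(f"lookup failed: {exc}")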
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.018 253516 DEBUG nova.compute.resource_tracker [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.018 253516 DEBUG nova.compute.resource_tracker [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.054 253516 DEBUG nova.scheduler.client.report [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Refreshing inventories for resource provider d9873737-caae-40cc-9346-77a33537057c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.073 253516 DEBUG nova.scheduler.client.report [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Updating ProviderTree inventory for provider d9873737-caae-40cc-9346-77a33537057c from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.073 253516 DEBUG nova.compute.provider_tree [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.084 253516 DEBUG nova.scheduler.client.report [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Refreshing aggregate associations for resource provider d9873737-caae-40cc-9346-77a33537057c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.096 253516 DEBUG nova.scheduler.client.report [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Refreshing trait associations for resource provider d9873737-caae-40cc-9346-77a33537057c, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.113 253516 DEBUG oslo_concurrency.processutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:48:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:17.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
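[annotation] The radosgw "beast:" lines are its access log; the HEAD / requests arriving every ~2 s from 192.168.122.100/.102 look like load-balancer health checks. A rough field extractor for these lines (the layout is inferred from the samples in this log, so treat the regex as an assumption):

    import re

    # Field layout inferred from the beast lines in this log.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous '
            '[25/Nov/2025:09:48:17.394 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))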
Nov 25 09:48:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:17 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900002670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:48:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712952052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.446 253516 DEBUG oslo_concurrency.processutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.450 253516 DEBUG nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 25 09:48:17 compute-0 nova_compute[253512]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.450 253516 INFO nova.virt.libvirt.host [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] kernel doesn't support AMD SEV
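[annotation] The SEV probe is a plain sysfs read: /sys/module/kvm_amd/parameters/sev holds N on this host (a KVM guest, not AMD hardware), so nova records that the kernel lacks AMD SEV. A sketch of the same check; note the accepted truthy values vary by kernel version, hence the two-value test:

    from pathlib import Path

    sev = Path("/sys/module/kvm_amd/parameters/sev")
    # Missing file: kvm_amd is not loaded at all. "1" or "Y": SEV on.
    supported = sev.exists() and sev.read_text().strip() in ("1", "Y")
    print("SEV supported" if supported else "kernel doesn't support AMD SEV")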
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.451 253516 DEBUG nova.compute.provider_tree [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Updating inventory in ProviderTree for provider d9873737-caae-40cc-9346-77a33537057c with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.451 253516 DEBUG nova.virt.libvirt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:48:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3156523680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3114636363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3534242528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/712952052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.527 253516 DEBUG nova.scheduler.client.report [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Updated inventory for provider d9873737-caae-40cc-9346-77a33537057c with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
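[annotation] Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above yields 16 VCPU, 7169 MEMORY_MB, and 53.1 DISK_GB of schedulable capacity. The arithmetic, using the values exactly as logged:

    # Inventory as logged by set_inventory_for_provider above.
    inventory = {
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 4, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")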
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.528 253516 DEBUG nova.compute.provider_tree [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Updating resource provider d9873737-caae-40cc-9346-77a33537057c generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.528 253516 DEBUG nova.compute.provider_tree [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Updating inventory in ProviderTree for provider d9873737-caae-40cc-9346-77a33537057c with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.605 253516 DEBUG nova.compute.provider_tree [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Updating resource provider d9873737-caae-40cc-9346-77a33537057c generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.624 253516 DEBUG nova.compute.resource_tracker [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.624 253516 DEBUG oslo_concurrency.lockutils [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.624 253516 DEBUG nova.service [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.655 253516 DEBUG nova.service [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 25 09:48:17 compute-0 nova_compute[253512]: 2025-11-25 09:48:17.656 253516 DEBUG nova.servicegroup.drivers.db [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 25 09:48:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:17.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:18 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:18 compute-0 ceph-mon[74207]: pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:48:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1737525609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:48:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:18 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:48:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:19 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:19.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:20] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:48:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:20] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:48:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:20 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900002670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:20 compute-0 ceph-mon[74207]: pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:48:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:20 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:20 compute-0 sudo[253871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:48:20 compute-0 sudo[253871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:20 compute-0 sudo[253871]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:20 compute-0 podman[253895]: 2025-11-25 09:48:20.844126174 +0000 UTC m=+0.038555268 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
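[annotation] The multipathd line above is a podman health_status event: the periodic healthcheck ran /openstack/healthcheck inside the container and reported healthy with a failing streak of 0. The same stream can be watched with podman events; a sketch that filters healthcheck results (JSON field names assumed from podman 4.x output):

    import json
    import subprocess

    # Follow the podman event stream as JSON (runs until interrupted)
    # and keep only healthcheck results. Field names ("Status", "Name",
    # "HealthStatus") are assumptions based on podman 4.x output.
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        event = json.loads(line)
        if event.get("Status") == "health_status":
            print(event.get("Name"), event.get("HealthStatus"))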
Nov 25 09:48:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:48:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:21.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:21 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:21.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:22 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:22 compute-0 ceph-mon[74207]: pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:48:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:22 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:48:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:23 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:23.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:24 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:24 compute-0 ceph-mon[74207]: pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:48:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:24 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:48:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:25 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:25.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:26 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:26 compute-0 ceph-mon[74207]: pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:48:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:26 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:26 compute-0 sudo[253919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:48:26 compute-0 sudo[253919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:26 compute-0 sudo[253919]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:26 compute-0 sudo[253944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:48:26 compute-0 sudo[253944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:27.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:27.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:27.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:27.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:27 compute-0 sudo[253944]: pam_unix(sudo:session): session closed for user root
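[annotation] The sudo lines show cephadm's usual orchestration pattern: probe for python3, then run the deployed cephadm binary with gather-facts, which prints host facts as one JSON object. A sketch of the same invocation (the binary path is copied from the audit line above; key names such as "hostname" are assumptions about cephadm's fact output and may differ by version):

    import json
    import subprocess

    # The invocation from the sudo audit trail above.
    out = subprocess.check_output(
        ["sudo", "python3",
         "/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
         "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
         "--timeout", "895", "gather-facts"])
    facts = json.loads(out)
    # "hostname" is an assumed key; print the fact count as a sanity check.
    print(facts.get("hostname"), len(facts), "facts")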
Nov 25 09:48:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:48:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:48:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:48:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:48:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:48:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:48:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:48:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
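[annotation] The mgr's cephadm module is driving the mon command interface here (config generate-minimal-conf, auth get, osd tree with states=["destroyed"]), and each dispatch is mirrored to the audit channel. The same interface is reachable from the rados Python binding; a sketch assuming /etc/ceph/ceph.conf and a client with sufficient caps:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same JSON command the audit channel shows being dispatched above.
    cmd = json.dumps({"prefix": "osd tree",
                      "states": ["destroyed"], "format": "json"})
    ret, out, errs = cluster.mon_command(cmd, b"")
    print(ret, errs, json.loads(out) if out else {})
    cluster.shutdown()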
Nov 25 09:48:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:48:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:48:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:48:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:48:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:48:27 compute-0 sudo[253998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:48:27 compute-0 sudo[253998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:27 compute-0 sudo[253998]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:27 compute-0 sudo[254023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:48:27 compute-0 sudo[254023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:27 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff910001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:48:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:48:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:48:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:48:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:48:27 compute-0 podman[254081]: 2025-11-25 09:48:27.707793018 +0000 UTC m=+0.026987817 container create 75d6e8093fc442eb9f3dfa2a1e775f3bc78d6ca67ab9380b2b45272c888ee427 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chandrasekhar, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:48:27 compute-0 systemd[1]: Started libpod-conmon-75d6e8093fc442eb9f3dfa2a1e775f3bc78d6ca67ab9380b2b45272c888ee427.scope.
Nov 25 09:48:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:48:27 compute-0 podman[254081]: 2025-11-25 09:48:27.762925891 +0000 UTC m=+0.082120701 container init 75d6e8093fc442eb9f3dfa2a1e775f3bc78d6ca67ab9380b2b45272c888ee427 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:48:27 compute-0 podman[254081]: 2025-11-25 09:48:27.767266476 +0000 UTC m=+0.086461276 container start 75d6e8093fc442eb9f3dfa2a1e775f3bc78d6ca67ab9380b2b45272c888ee427 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chandrasekhar, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:48:27 compute-0 podman[254081]: 2025-11-25 09:48:27.768406134 +0000 UTC m=+0.087600944 container attach 75d6e8093fc442eb9f3dfa2a1e775f3bc78d6ca67ab9380b2b45272c888ee427 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:48:27 compute-0 modest_chandrasekhar[254095]: 167 167
Nov 25 09:48:27 compute-0 systemd[1]: libpod-75d6e8093fc442eb9f3dfa2a1e775f3bc78d6ca67ab9380b2b45272c888ee427.scope: Deactivated successfully.
Nov 25 09:48:27 compute-0 podman[254081]: 2025-11-25 09:48:27.771300042 +0000 UTC m=+0.090494841 container died 75d6e8093fc442eb9f3dfa2a1e775f3bc78d6ca67ab9380b2b45272c888ee427 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f0e339cef8074b0f2f6bf7ac586bf6a510100109b1471764309d82b9ecc8941-merged.mount: Deactivated successfully.
Nov 25 09:48:27 compute-0 podman[254081]: 2025-11-25 09:48:27.787561051 +0000 UTC m=+0.106755851 container remove 75d6e8093fc442eb9f3dfa2a1e775f3bc78d6ca67ab9380b2b45272c888ee427 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 09:48:27 compute-0 podman[254081]: 2025-11-25 09:48:27.696484926 +0000 UTC m=+0.015679736 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:48:27 compute-0 systemd[1]: libpod-conmon-75d6e8093fc442eb9f3dfa2a1e775f3bc78d6ca67ab9380b2b45272c888ee427.scope: Deactivated successfully.
Nov 25 09:48:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:27.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:27 compute-0 podman[254117]: 2025-11-25 09:48:27.907402053 +0000 UTC m=+0.027982182 container create 9840dc797d55b995c9d8f95f4d200cdad6010afca1ffe1accc232b79161253a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:48:27 compute-0 systemd[1]: Started libpod-conmon-9840dc797d55b995c9d8f95f4d200cdad6010afca1ffe1accc232b79161253a5.scope.
Nov 25 09:48:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac6b4efe007e1373712f24cc26ef044175cd72c303779c00a3ea76b4829f40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac6b4efe007e1373712f24cc26ef044175cd72c303779c00a3ea76b4829f40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac6b4efe007e1373712f24cc26ef044175cd72c303779c00a3ea76b4829f40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac6b4efe007e1373712f24cc26ef044175cd72c303779c00a3ea76b4829f40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac6b4efe007e1373712f24cc26ef044175cd72c303779c00a3ea76b4829f40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:27 compute-0 podman[254117]: 2025-11-25 09:48:27.955941495 +0000 UTC m=+0.076521633 container init 9840dc797d55b995c9d8f95f4d200cdad6010afca1ffe1accc232b79161253a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:48:27 compute-0 podman[254117]: 2025-11-25 09:48:27.960575133 +0000 UTC m=+0.081155261 container start 9840dc797d55b995c9d8f95f4d200cdad6010afca1ffe1accc232b79161253a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:48:27 compute-0 podman[254117]: 2025-11-25 09:48:27.963057793 +0000 UTC m=+0.083637931 container attach 9840dc797d55b995c9d8f95f4d200cdad6010afca1ffe1accc232b79161253a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_blackburn, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 09:48:27 compute-0 podman[254117]: 2025-11-25 09:48:27.896503103 +0000 UTC m=+0.017083252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:48:28 compute-0 recursing_blackburn[254131]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:48:28 compute-0 recursing_blackburn[254131]: --> All data devices are unavailable
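[annotation] ceph-volume was handed a single LVM data device (/dev/ceph_vg0/ceph_lv0, per the lvm batch command above) and rejected it as unavailable; a common cause is that the LV is already in use, e.g. tagged as an existing OSD, which would also explain the lvm list run that follows. One way to inspect the LV's tags, assuming lvm2's JSON reporting:

    import json
    import subprocess

    # lvm2's JSON report; ceph-volume marks prepared OSD LVs with
    # ceph.* lv_tags, which makes them "unavailable" to lvm batch.
    out = subprocess.check_output(
        ["lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_tags", "/dev/ceph_vg0/ceph_lv0"])
    for lv in json.loads(out)["report"][0]["lv"]:
        print(lv["lv_name"], lv["vg_name"], lv["lv_tags"])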
Nov 25 09:48:28 compute-0 systemd[1]: libpod-9840dc797d55b995c9d8f95f4d200cdad6010afca1ffe1accc232b79161253a5.scope: Deactivated successfully.
Nov 25 09:48:28 compute-0 podman[254146]: 2025-11-25 09:48:28.242789482 +0000 UTC m=+0.016235834 container died 9840dc797d55b995c9d8f95f4d200cdad6010afca1ffe1accc232b79161253a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ac6b4efe007e1373712f24cc26ef044175cd72c303779c00a3ea76b4829f40-merged.mount: Deactivated successfully.
Nov 25 09:48:28 compute-0 podman[254146]: 2025-11-25 09:48:28.263103324 +0000 UTC m=+0.036549676 container remove 9840dc797d55b995c9d8f95f4d200cdad6010afca1ffe1accc232b79161253a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_blackburn, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:48:28 compute-0 systemd[1]: libpod-conmon-9840dc797d55b995c9d8f95f4d200cdad6010afca1ffe1accc232b79161253a5.scope: Deactivated successfully.
Nov 25 09:48:28 compute-0 sudo[254023]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:28 compute-0 sudo[254158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:48:28 compute-0 sudo[254158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:28 compute-0 sudo[254158]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:28 compute-0 sudo[254183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:48:28 compute-0 sudo[254183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
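The two sudo entries above show how cephadm gathers OSD state: the mgr's cephadm module logs in as ceph-admin, locates python3, then runs the host's cephadm binary, which in turn launches ceph-volume inside the one-shot podman containers logged around it. A minimal sketch that replays the same call and pretty-prints the result (the command line is copied verbatim from the log; run as ceph-admin on this host):

    import json
    import subprocess

    FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # same invocation as the COMMAND= line above
    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(json.dumps(json.loads(out), indent=2))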
Nov 25 09:48:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:28 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:28 compute-0 ceph-mon[74207]: pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:48:28 compute-0 podman[254239]: 2025-11-25 09:48:28.677210119 +0000 UTC m=+0.027044805 container create 5d234545806804252ad76935ebf6a4007e8d2eeb485d6df1622d75ce853b9164 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_turing, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:48:28 compute-0 systemd[1]: Started libpod-conmon-5d234545806804252ad76935ebf6a4007e8d2eeb485d6df1622d75ce853b9164.scope.
Nov 25 09:48:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:48:28 compute-0 podman[254239]: 2025-11-25 09:48:28.732015726 +0000 UTC m=+0.081850421 container init 5d234545806804252ad76935ebf6a4007e8d2eeb485d6df1622d75ce853b9164 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:48:28 compute-0 podman[254239]: 2025-11-25 09:48:28.73681768 +0000 UTC m=+0.086652356 container start 5d234545806804252ad76935ebf6a4007e8d2eeb485d6df1622d75ce853b9164 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_turing, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 09:48:28 compute-0 podman[254239]: 2025-11-25 09:48:28.738150953 +0000 UTC m=+0.087985629 container attach 5d234545806804252ad76935ebf6a4007e8d2eeb485d6df1622d75ce853b9164 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_turing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:48:28 compute-0 condescending_turing[254252]: 167 167
Nov 25 09:48:28 compute-0 systemd[1]: libpod-5d234545806804252ad76935ebf6a4007e8d2eeb485d6df1622d75ce853b9164.scope: Deactivated successfully.
Nov 25 09:48:28 compute-0 podman[254239]: 2025-11-25 09:48:28.740284336 +0000 UTC m=+0.090119013 container died 5d234545806804252ad76935ebf6a4007e8d2eeb485d6df1622d75ce853b9164 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_turing, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcf5a0012ed3322f781947dbec3d1a1287ebab3c04690487102ee24d17e014d8-merged.mount: Deactivated successfully.
Nov 25 09:48:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:28 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000ae00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:28 compute-0 podman[254239]: 2025-11-25 09:48:28.761544564 +0000 UTC m=+0.111379239 container remove 5d234545806804252ad76935ebf6a4007e8d2eeb485d6df1622d75ce853b9164 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:48:28 compute-0 podman[254239]: 2025-11-25 09:48:28.665961118 +0000 UTC m=+0.015795814 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:48:28 compute-0 systemd[1]: libpod-conmon-5d234545806804252ad76935ebf6a4007e8d2eeb485d6df1622d75ce853b9164.scope: Deactivated successfully.
Nov 25 09:48:28 compute-0 podman[254274]: 2025-11-25 09:48:28.880917132 +0000 UTC m=+0.028854126 container create 16c945a738fb21659719594da9eab012906a0249830a841b1929e41049e888e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kilby, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:48:28 compute-0 systemd[1]: Started libpod-conmon-16c945a738fb21659719594da9eab012906a0249830a841b1929e41049e888e1.scope.
Nov 25 09:48:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d9e6e1eb956beda422d125b4029547ca6996b6860666730d3a4fdefe1601ab1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d9e6e1eb956beda422d125b4029547ca6996b6860666730d3a4fdefe1601ab1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d9e6e1eb956beda422d125b4029547ca6996b6860666730d3a4fdefe1601ab1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d9e6e1eb956beda422d125b4029547ca6996b6860666730d3a4fdefe1601ab1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
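The four kernel lines above are the y2038 warning: these XFS filesystems carry 32-bit inode timestamps, which run out at 0x7fffffff seconds after the Unix epoch. Converting that limit:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00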
Nov 25 09:48:28 compute-0 podman[254274]: 2025-11-25 09:48:28.944508265 +0000 UTC m=+0.092445258 container init 16c945a738fb21659719594da9eab012906a0249830a841b1929e41049e888e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kilby, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:48:28 compute-0 podman[254274]: 2025-11-25 09:48:28.949644198 +0000 UTC m=+0.097581192 container start 16c945a738fb21659719594da9eab012906a0249830a841b1929e41049e888e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kilby, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:48:28 compute-0 podman[254274]: 2025-11-25 09:48:28.95080721 +0000 UTC m=+0.098744214 container attach 16c945a738fb21659719594da9eab012906a0249830a841b1929e41049e888e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kilby, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:48:28 compute-0 podman[254274]: 2025-11-25 09:48:28.87012384 +0000 UTC m=+0.018060855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:48:29 compute-0 elastic_kilby[254287]: {
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:     "1": [
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:         {
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "devices": [
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "/dev/loop3"
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             ],
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "lv_name": "ceph_lv0",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "lv_size": "21470642176",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "name": "ceph_lv0",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "tags": {
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.cluster_name": "ceph",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.crush_device_class": "",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.encrypted": "0",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.osd_id": "1",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.type": "block",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.vdo": "0",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:                 "ceph.with_tpm": "0"
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             },
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "type": "block",
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:             "vg_name": "ceph_vg0"
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:         }
Nov 25 09:48:29 compute-0 elastic_kilby[254287]:     ]
Nov 25 09:48:29 compute-0 elastic_kilby[254287]: }
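The JSON above (the output of `ceph-volume lvm list --format json`) is keyed by OSD id; each entry carries the same metadata twice, once flattened into the lv_tags string and once expanded under "tags". A small parser sketch (function name is illustrative) that reduces such a document to an osd_id -> block-device map:

    import json

    def osd_devices(raw: str) -> dict:
        """Map OSD id -> lv_path from `ceph-volume lvm list --format json`."""
        listing = json.loads(raw)
        result = {}
        for osd_id, entries in listing.items():
            for entry in entries:
                if entry.get("type") == "block":
                    # lv_tags repeats the "tags" object as comma-separated k=v pairs
                    flat = dict(kv.split("=", 1)
                                for kv in entry["lv_tags"].split(",") if "=" in kv)
                    assert flat["ceph.osd_id"] == osd_id
                    result[osd_id] = entry["lv_path"]
        return result

    # for the document above: {"1": "/dev/ceph_vg0/ceph_lv0"}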
Nov 25 09:48:29 compute-0 systemd[1]: libpod-16c945a738fb21659719594da9eab012906a0249830a841b1929e41049e888e1.scope: Deactivated successfully.
Nov 25 09:48:29 compute-0 podman[254274]: 2025-11-25 09:48:29.180223709 +0000 UTC m=+0.328160703 container died 16c945a738fb21659719594da9eab012906a0249830a841b1929e41049e888e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kilby, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d9e6e1eb956beda422d125b4029547ca6996b6860666730d3a4fdefe1601ab1-merged.mount: Deactivated successfully.
Nov 25 09:48:29 compute-0 podman[254274]: 2025-11-25 09:48:29.201633908 +0000 UTC m=+0.349570892 container remove 16c945a738fb21659719594da9eab012906a0249830a841b1929e41049e888e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kilby, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Nov 25 09:48:29 compute-0 systemd[1]: libpod-conmon-16c945a738fb21659719594da9eab012906a0249830a841b1929e41049e888e1.scope: Deactivated successfully.
Nov 25 09:48:29 compute-0 sudo[254183]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:29 compute-0 sudo[254305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:48:29 compute-0 sudo[254305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:29 compute-0 sudo[254305]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:29 compute-0 sudo[254330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:48:29 compute-0 sudo[254330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:29 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000ae00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:29 compute-0 podman[254387]: 2025-11-25 09:48:29.612872275 +0000 UTC m=+0.026914117 container create 82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 09:48:29 compute-0 systemd[1]: Started libpod-conmon-82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145.scope.
Nov 25 09:48:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:48:29 compute-0 podman[254387]: 2025-11-25 09:48:29.676589375 +0000 UTC m=+0.090631228 container init 82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:48:29 compute-0 podman[254387]: 2025-11-25 09:48:29.681856266 +0000 UTC m=+0.095898108 container start 82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 25 09:48:29 compute-0 podman[254387]: 2025-11-25 09:48:29.682997157 +0000 UTC m=+0.097039010 container attach 82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:48:29 compute-0 eager_heyrovsky[254399]: 167 167
Nov 25 09:48:29 compute-0 systemd[1]: libpod-82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145.scope: Deactivated successfully.
Nov 25 09:48:29 compute-0 conmon[254399]: conmon 82be6bb7ad783d067556 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145.scope/container/memory.events
Nov 25 09:48:29 compute-0 podman[254387]: 2025-11-25 09:48:29.685756528 +0000 UTC m=+0.099798372 container died 82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-125ab2dee5a3be26d468691fe41fc8effd5013acb0c790a457e1346bdd683c66-merged.mount: Deactivated successfully.
Nov 25 09:48:29 compute-0 podman[254387]: 2025-11-25 09:48:29.602269232 +0000 UTC m=+0.016311096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:48:29 compute-0 podman[254387]: 2025-11-25 09:48:29.702355587 +0000 UTC m=+0.116397430 container remove 82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:48:29 compute-0 systemd[1]: libpod-conmon-82be6bb7ad783d06755665f199ced5d92a4836d0c695cbd1f7d8aea4c84c6145.scope: Deactivated successfully.
Nov 25 09:48:29 compute-0 podman[254422]: 2025-11-25 09:48:29.827188572 +0000 UTC m=+0.030611569 container create 5f32470b5db9a49de19b0724692240dd097d791995e87d9fe926539e0849f463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:48:29 compute-0 systemd[1]: Started libpod-conmon-5f32470b5db9a49de19b0724692240dd097d791995e87d9fe926539e0849f463.scope.
Nov 25 09:48:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:29.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb8390d79b009ad4585608113469f18a94472b44b34e180e68284bed8098efb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb8390d79b009ad4585608113469f18a94472b44b34e180e68284bed8098efb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb8390d79b009ad4585608113469f18a94472b44b34e180e68284bed8098efb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb8390d79b009ad4585608113469f18a94472b44b34e180e68284bed8098efb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:48:29 compute-0 podman[254422]: 2025-11-25 09:48:29.888444511 +0000 UTC m=+0.091867508 container init 5f32470b5db9a49de19b0724692240dd097d791995e87d9fe926539e0849f463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:48:29 compute-0 podman[254422]: 2025-11-25 09:48:29.893121469 +0000 UTC m=+0.096544467 container start 5f32470b5db9a49de19b0724692240dd097d791995e87d9fe926539e0849f463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:48:29 compute-0 podman[254422]: 2025-11-25 09:48:29.894297997 +0000 UTC m=+0.097721015 container attach 5f32470b5db9a49de19b0724692240dd097d791995e87d9fe926539e0849f463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 09:48:29 compute-0 podman[254422]: 2025-11-25 09:48:29.815012032 +0000 UTC m=+0.018435050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:48:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:48:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:30] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:48:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:30] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:48:30 compute-0 dazzling_shannon[254435]: {}
Nov 25 09:48:30 compute-0 lvm[254513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:48:30 compute-0 lvm[254513]: VG ceph_vg0 finished
Nov 25 09:48:30 compute-0 systemd[1]: libpod-5f32470b5db9a49de19b0724692240dd097d791995e87d9fe926539e0849f463.scope: Deactivated successfully.
Nov 25 09:48:30 compute-0 podman[254422]: 2025-11-25 09:48:30.375735315 +0000 UTC m=+0.579158313 container died 5f32470b5db9a49de19b0724692240dd097d791995e87d9fe926539e0849f463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fb8390d79b009ad4585608113469f18a94472b44b34e180e68284bed8098efb-merged.mount: Deactivated successfully.
Nov 25 09:48:30 compute-0 podman[254422]: 2025-11-25 09:48:30.397350972 +0000 UTC m=+0.600773970 container remove 5f32470b5db9a49de19b0724692240dd097d791995e87d9fe926539e0849f463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:48:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:30 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000ae00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:30 compute-0 lvm[254516]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:48:30 compute-0 lvm[254516]: VG ceph_vg0 finished
Nov 25 09:48:30 compute-0 systemd[1]: libpod-conmon-5f32470b5db9a49de19b0724692240dd097d791995e87d9fe926539e0849f463.scope: Deactivated successfully.
Nov 25 09:48:30 compute-0 sudo[254330]: pam_unix(sudo:session): session closed for user root
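This second ceph-volume pass (`raw list --format json`, the dazzling_shannon container) printed `{}` above: there are no raw-mode OSDs on this host, so everything cephadm reports for compute-0 comes from the LVM listing. A sketch of the combined check, with the data trimmed down from the two outputs above:

    import json

    lvm_doc = {"1": [{"type": "block", "lv_path": "/dev/ceph_vg0/ceph_lv0"}]}
    raw_doc = json.loads("{}")  # literal `raw list --format json` output above

    def host_has_osds(lvm_listing, raw_listing) -> bool:
        # a host has OSDs if either listing mode returns entries
        return bool(lvm_listing) or bool(raw_listing)

    print(host_has_osds(lvm_doc, raw_doc))  # True: OSD 1 via LVM, none raw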
Nov 25 09:48:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:48:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:48:30 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:30 compute-0 sudo[254525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:48:30 compute-0 sudo[254525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:30 compute-0 sudo[254525]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:30 compute-0 ceph-mon[74207]: pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:48:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:30 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:48:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:31 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:31.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:32 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000bc10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:32 compute-0 ceph-mon[74207]: pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:48:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:32 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000bc10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
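The _set_new_cache_sizes figures above are plain bytes; in more familiar units the mon is sizing roughly 973 MiB of cache, with 332 MiB incremental/full allocations and 304 MiB for the KV store:

    # convert the _set_new_cache_sizes values above to MiB
    vals = {"cache_size": 1020054731, "inc_alloc": 348127232,
            "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, nbytes in vals.items():
        print(name, round(nbytes / 2**20), "MiB")   # ~973, 332, 332, 304 MiB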
Nov 25 09:48:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:33.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:33 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:33.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:34 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:34 compute-0 ceph-mon[74207]: pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:34 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000bc10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:35.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:35 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000bc10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:35.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:36 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:36 compute-0 ceph-mon[74207]: pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:36 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:37.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:37.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:37.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:37.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
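The alertmanager failures above are pure DNS: the dashboard webhook receivers point at np0005534694/5/6.shiftstack, which the resolver at 192.168.122.80 cannot answer for. A quick reproduction sketch (hostnames taken from the log lines; this uses the system resolver rather than querying 192.168.122.80 directly):

    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            print(host, socket.getaddrinfo(host, 8443)[0][4])
        except socket.gaierror as exc:
            print(host, "unresolvable:", exc)   # expected: no such host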
Nov 25 09:48:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:48:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:37.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:37 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:37.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:38 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 09:48:38 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2669008744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 09:48:38 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2669008744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-0 ceph-mon[74207]: pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:48:38 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3835819779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3835819779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/2669008744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/2669008744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 09:48:38 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1190347804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 09:48:38 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1190347804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:48:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:38 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:38 compute-0 podman[254558]: 2025-11-25 09:48:38.9766106 +0000 UTC m=+0.040781989 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
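The health_status event above comes from podman's periodic healthcheck timer; per the embedded config_data, the probe is the /openstack/healthcheck script mounted into the container. The same check can be triggered by hand, a sketch (exit status 0 means healthy):

    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")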
Nov 25 09:48:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:39.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:39 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:39 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1190347804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:48:39 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1190347804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:48:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:40] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:48:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:40] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:48:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:40 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:40 compute-0 ceph-mon[74207]: pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:40 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:40 compute-0 sudo[254576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:48:40 compute-0 sudo[254576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:48:40 compute-0 sudo[254576]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:48:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:41.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:41 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:41 compute-0 nova_compute[253512]: 2025-11-25 09:48:41.657 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:48:41 compute-0 nova_compute[253512]: 2025-11-25 09:48:41.684 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:48:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:41.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:42 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:42 compute-0 ceph-mon[74207]: pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:48:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:42 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:43.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:43 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:43.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:44 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:44 compute-0 ceph-mon[74207]: pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:44 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:48:44
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.log', 'backups', 'images', 'vms', '.nfs', 'cephfs.cephfs.meta']
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:48:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:48:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:48:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:48:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:45.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:45 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:48:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:46 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:46 compute-0 ceph-mon[74207]: pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:46 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:46 compute-0 podman[254608]: 2025-11-25 09:48:46.98941561 +0000 UTC m=+0.052786468 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:48:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094846 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:48:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:47.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:47.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:47.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:47.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:48:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:47.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:47 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:47.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:48 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:48 compute-0 ceph-mon[74207]: pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:48:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:48 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:49.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:49 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:49.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:50] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:48:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:48:50] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Nov 25 09:48:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:50 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:50 compute-0 ceph-mon[74207]: pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:48:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:50 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:50 compute-0 podman[254635]: 2025-11-25 09:48:50.978472362 +0000 UTC m=+0.038409866 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 09:48:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:48:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:51 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:51.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:52 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:52 compute-0 ceph-mon[74207]: pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:48:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:52 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:53.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:53 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000caa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:53.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:54 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:54 compute-0 ceph-mon[74207]: pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:54 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:48:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:48:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:48:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:55.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:48:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:55 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:55.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:56 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:56 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:48:56 compute-0 ceph-mon[74207]: pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:48:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:56 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:57.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:57.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:57.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:48:57.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:48:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:48:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:57.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:57 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:48:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:57.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:58 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:58 compute-0 ceph-mon[74207]: pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:48:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:58 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:48:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:48:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:59.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:48:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:59 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:48:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:59 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:48:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:48:59 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:48:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:48:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:48:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:59.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:48:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:48:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:00] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:49:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:00] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:49:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:00 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:00 compute-0 ceph-mon[74207]: pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:00 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:00 compute-0 sudo[254664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:49:00 compute-0 sudo[254664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:00 compute-0 sudo[254664]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:49:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:01.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:01 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0045b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:01.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:02 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:02 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:49:02 compute-0 ceph-mon[74207]: pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:49:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:02 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:49:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:03.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:03 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:03.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:04 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:04 compute-0 ceph-mon[74207]: pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:49:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:04 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:49:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:49:05.377 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:49:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:49:05.377 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:49:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:49:05.378 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:49:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:05 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:05.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:05.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:06 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:06 compute-0 ceph-mon[74207]: pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:49:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:06 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:07.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:07.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:07.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:07.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:49:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:07 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:07.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.839786) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147839806, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 722, "num_deletes": 250, "total_data_size": 1045792, "memory_usage": 1058440, "flush_reason": "Manual Compaction"}
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147842569, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 693820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17336, "largest_seqno": 18057, "table_properties": {"data_size": 690614, "index_size": 1050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8478, "raw_average_key_size": 20, "raw_value_size": 683831, "raw_average_value_size": 1628, "num_data_blocks": 46, "num_entries": 420, "num_filter_entries": 420, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064095, "oldest_key_time": 1764064095, "file_creation_time": 1764064147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 2806 microseconds, and 1893 cpu microseconds.
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.842592) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 693820 bytes OK
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.842602) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.843184) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.843195) EVENT_LOG_v1 {"time_micros": 1764064147843191, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.843206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1042132, prev total WAL file size 1042132, number of live WAL files 2.
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.843556) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(677KB)], [35(14MB)]
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147843580, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 15829906, "oldest_snapshot_seqno": -1}
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4971 keys, 12085359 bytes, temperature: kUnknown
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147864941, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12085359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12051347, "index_size": 20462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 124807, "raw_average_key_size": 25, "raw_value_size": 11960540, "raw_average_value_size": 2406, "num_data_blocks": 856, "num_entries": 4971, "num_filter_entries": 4971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764064147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.865162) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12085359 bytes
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.871658) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 736.4 rd, 562.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 14.4 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(40.2) write-amplify(17.4) OK, records in: 5464, records dropped: 493 output_compression: NoCompression
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.871671) EVENT_LOG_v1 {"time_micros": 1764064147871665, "job": 16, "event": "compaction_finished", "compaction_time_micros": 21495, "compaction_time_cpu_micros": 17804, "output_level": 6, "num_output_files": 1, "total_output_size": 12085359, "num_input_records": 5464, "num_output_records": 4971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147872140, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147874072, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.843508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:49:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:08 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:08 compute-0 ceph-mon[74207]: pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:49:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:08 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094909 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:49:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:49:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:09 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240053e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:09.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:09.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:09 compute-0 podman[254699]: 2025-11-25 09:49:09.973455184 +0000 UTC m=+0.035035616 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:49:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:10] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:49:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:10] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:49:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:10 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:10 compute-0 ceph-mon[74207]: pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:49:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:10 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0056b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:49:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:11 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:11.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:11.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:12 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:12 compute-0 ceph-mon[74207]: pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:49:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:12 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:13 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:13.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:13.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:14 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:14 compute-0 ceph-mon[74207]: pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:14 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240053e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:49:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:49:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:49:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:49:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:49:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:49:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:49:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:15 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:15.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.503 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.503 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.503 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.503 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.504 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.504 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.504 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.504 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.504 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.549 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.549 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.550 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.550 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.550 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:49:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2953543723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:49:15 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116950350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:15 compute-0 nova_compute[253512]: 2025-11-25 09:49:15.888 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:49:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:15.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.080 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.081 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4966MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.081 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.082 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.223 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.223 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.243 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:49:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:16 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:49:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721534697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.572 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.575 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.599 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.600 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:49:16 compute-0 nova_compute[253512]: 2025-11-25 09:49:16.600 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:49:16 compute-0 ceph-mon[74207]: pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3660985114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4116950350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2435061998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3521299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1721534697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:49:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:16 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:17.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 6 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:17.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:17.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:17 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:17.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:17.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:17 compute-0 podman[254767]: 2025-11-25 09:49:17.991539696 +0000 UTC m=+0.056619395 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:49:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:18 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900002190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:18 compute-0 ceph-mon[74207]: pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:49:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:18 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900002190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:19 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000cff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:19.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:19.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Nov 25 09:49:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2422871279' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 09:49:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.15132 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:20 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 09:49:20 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 09:49:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.15132 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 25 09:49:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:20] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:49:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:20] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:49:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.24751 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:20 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 09:49:20 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 09:49:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:20 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:20 compute-0 ceph-mon[74207]: pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/2422871279' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 09:49:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/2201242462' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 09:49:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:20 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900002190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:20 compute-0 sudo[254793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:49:20 compute-0 sudo[254793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:20 compute-0 sudo[254793]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:21 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:21.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:21 compute-0 ceph-mon[74207]: from='client.15132 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:21 compute-0 ceph-mon[74207]: from='client.15132 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 25 09:49:21 compute-0 ceph-mon[74207]: from='client.24751 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:21.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:21 compute-0 podman[254820]: 2025-11-25 09:49:21.979394513 +0000 UTC m=+0.042379720 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 09:49:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:22 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:22 compute-0 ceph-mon[74207]: pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:22 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:23 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900002190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:23.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:23.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:24 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900002190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:24 compute-0 ceph-mon[74207]: pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:24 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:25 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:25.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:25.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:26 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:26 compute-0 ceph-mon[74207]: pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:26 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900003d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:27.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:27.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:27.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:27.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:49:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:27 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:27.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:27.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:28 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:28 compute-0 ceph-mon[74207]: pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:49:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:28 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:29 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:29.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:29.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:49:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:30] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:49:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:30] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:49:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:30 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:30 compute-0 sudo[254847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:49:30 compute-0 sudo[254847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:30 compute-0 sudo[254847]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:30 compute-0 sudo[254872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 09:49:30 compute-0 sudo[254872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:30 compute-0 ceph-mon[74207]: pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:30 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:31 compute-0 podman[254954]: 2025-11-25 09:49:31.096043053 +0000 UTC m=+0.040638538 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:49:31 compute-0 podman[254954]: 2025-11-25 09:49:31.175146599 +0000 UTC m=+0.119742085 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:49:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:31 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:31.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:31 compute-0 podman[255066]: 2025-11-25 09:49:31.536664011 +0000 UTC m=+0.034542927 container exec e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:49:31 compute-0 podman[255066]: 2025-11-25 09:49:31.548120558 +0000 UTC m=+0.045999453 container exec_died e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:49:31 compute-0 podman[255137]: 2025-11-25 09:49:31.735275922 +0000 UTC m=+0.033407457 container exec 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:49:31 compute-0 podman[255137]: 2025-11-25 09:49:31.756052568 +0000 UTC m=+0.054184093 container exec_died 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:49:31 compute-0 podman[255194]: 2025-11-25 09:49:31.888679781 +0000 UTC m=+0.032426788 container exec c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:49:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:31.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:32 compute-0 podman[255194]: 2025-11-25 09:49:32.019134509 +0000 UTC m=+0.162881516 container exec_died c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 09:49:32 compute-0 podman[255254]: 2025-11-25 09:49:32.152318983 +0000 UTC m=+0.032459800 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:49:32 compute-0 podman[255254]: 2025-11-25 09:49:32.164700343 +0000 UTC m=+0.044841140 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 09:49:32 compute-0 podman[255306]: 2025-11-25 09:49:32.299326115 +0000 UTC m=+0.033556207 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, name=keepalived, version=2.2.4, distribution-scope=public, release=1793, description=keepalived for Ceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Nov 25 09:49:32 compute-0 podman[255306]: 2025-11-25 09:49:32.307145644 +0000 UTC m=+0.041375738 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, architecture=x86_64, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2, version=2.2.4)
Nov 25 09:49:32 compute-0 podman[255356]: 2025-11-25 09:49:32.444487517 +0000 UTC m=+0.032330247 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:49:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:32 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:32 compute-0 podman[255356]: 2025-11-25 09:49:32.466592837 +0000 UTC m=+0.054435558 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 09:49:32 compute-0 podman[255404]: 2025-11-25 09:49:32.568241332 +0000 UTC m=+0.033069199 container exec bf08aebaf45a5f98995f2aa0990acb39be3ad8282b92f73ae3cb21c6130e2d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:49:32 compute-0 podman[255404]: 2025-11-25 09:49:32.579068242 +0000 UTC m=+0.043896109 container exec_died bf08aebaf45a5f98995f2aa0990acb39be3ad8282b92f73ae3cb21c6130e2d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:49:32 compute-0 sudo[254872]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:49:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:49:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:32 compute-0 ceph-mon[74207]: pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:32 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:32 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:32 compute-0 sudo[255460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:49:32 compute-0 sudo[255460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:32 compute-0 sudo[255460]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:32 compute-0 sudo[255485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:49:32 compute-0 sudo[255485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:32 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:33 compute-0 sudo[255485]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:49:33 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:49:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:49:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:49:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:49:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:49:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:49:33 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:49:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:49:33 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:49:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:49:33 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:49:33 compute-0 sudo[255539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:49:33 compute-0 sudo[255539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:33 compute-0 sudo[255539]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:33 compute-0 sudo[255564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:49:33 compute-0 sudo[255564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:33 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:33.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:33 compute-0 podman[255620]: 2025-11-25 09:49:33.630767324 +0000 UTC m=+0.027244069 container create 25e31db03697f1f9d32b703b9f33c70606f3233c0625dd0aa37442b2bf184d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_jackson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:49:33 compute-0 systemd[1]: Started libpod-conmon-25e31db03697f1f9d32b703b9f33c70606f3233c0625dd0aa37442b2bf184d17.scope.
Nov 25 09:49:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:49:33 compute-0 podman[255620]: 2025-11-25 09:49:33.688203647 +0000 UTC m=+0.084680381 container init 25e31db03697f1f9d32b703b9f33c70606f3233c0625dd0aa37442b2bf184d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_jackson, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:49:33 compute-0 podman[255620]: 2025-11-25 09:49:33.692948892 +0000 UTC m=+0.089425626 container start 25e31db03697f1f9d32b703b9f33c70606f3233c0625dd0aa37442b2bf184d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 09:49:33 compute-0 podman[255620]: 2025-11-25 09:49:33.694256667 +0000 UTC m=+0.090733401 container attach 25e31db03697f1f9d32b703b9f33c70606f3233c0625dd0aa37442b2bf184d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:49:33 compute-0 vigilant_jackson[255633]: 167 167
Nov 25 09:49:33 compute-0 systemd[1]: libpod-25e31db03697f1f9d32b703b9f33c70606f3233c0625dd0aa37442b2bf184d17.scope: Deactivated successfully.
Nov 25 09:49:33 compute-0 podman[255620]: 2025-11-25 09:49:33.696463457 +0000 UTC m=+0.092940191 container died 25e31db03697f1f9d32b703b9f33c70606f3233c0625dd0aa37442b2bf184d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fefb4804c733afa90bf1749de9e72a78ba5e46d38dd28ca4d4bfb49754dd6d2-merged.mount: Deactivated successfully.
Nov 25 09:49:33 compute-0 podman[255620]: 2025-11-25 09:49:33.619764223 +0000 UTC m=+0.016240977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:49:33 compute-0 podman[255620]: 2025-11-25 09:49:33.716487174 +0000 UTC m=+0.112963908 container remove 25e31db03697f1f9d32b703b9f33c70606f3233c0625dd0aa37442b2bf184d17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:49:33 compute-0 systemd[1]: libpod-conmon-25e31db03697f1f9d32b703b9f33c70606f3233c0625dd0aa37442b2bf184d17.scope: Deactivated successfully.
Nov 25 09:49:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:49:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:49:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:49:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:49:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:49:33 compute-0 podman[255656]: 2025-11-25 09:49:33.835054843 +0000 UTC m=+0.027796790 container create 7c946a5857c995d970d82cb63c75734161a9a17f8aa5dc621636ce3841b5ae34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:49:33 compute-0 systemd[1]: Started libpod-conmon-7c946a5857c995d970d82cb63c75734161a9a17f8aa5dc621636ce3841b5ae34.scope.
Nov 25 09:49:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34dc5a864233f29d55e7c8828966366641fd351df8f987c13e16a7ddaba603b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34dc5a864233f29d55e7c8828966366641fd351df8f987c13e16a7ddaba603b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34dc5a864233f29d55e7c8828966366641fd351df8f987c13e16a7ddaba603b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34dc5a864233f29d55e7c8828966366641fd351df8f987c13e16a7ddaba603b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34dc5a864233f29d55e7c8828966366641fd351df8f987c13e16a7ddaba603b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:33 compute-0 podman[255656]: 2025-11-25 09:49:33.891517961 +0000 UTC m=+0.084259929 container init 7c946a5857c995d970d82cb63c75734161a9a17f8aa5dc621636ce3841b5ae34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:49:33 compute-0 podman[255656]: 2025-11-25 09:49:33.898136708 +0000 UTC m=+0.090878655 container start 7c946a5857c995d970d82cb63c75734161a9a17f8aa5dc621636ce3841b5ae34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_northcutt, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:49:33 compute-0 podman[255656]: 2025-11-25 09:49:33.899314689 +0000 UTC m=+0.092056636 container attach 7c946a5857c995d970d82cb63c75734161a9a17f8aa5dc621636ce3841b5ae34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:49:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:33.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:33 compute-0 podman[255656]: 2025-11-25 09:49:33.823623254 +0000 UTC m=+0.016365222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:49:34 compute-0 confident_northcutt[255669]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:49:34 compute-0 confident_northcutt[255669]: --> All data devices are unavailable
Nov 25 09:49:34 compute-0 systemd[1]: libpod-7c946a5857c995d970d82cb63c75734161a9a17f8aa5dc621636ce3841b5ae34.scope: Deactivated successfully.
Nov 25 09:49:34 compute-0 podman[255656]: 2025-11-25 09:49:34.157599058 +0000 UTC m=+0.350340995 container died 7c946a5857c995d970d82cb63c75734161a9a17f8aa5dc621636ce3841b5ae34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_northcutt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f34dc5a864233f29d55e7c8828966366641fd351df8f987c13e16a7ddaba603b-merged.mount: Deactivated successfully.
Nov 25 09:49:34 compute-0 podman[255656]: 2025-11-25 09:49:34.180387576 +0000 UTC m=+0.373129523 container remove 7c946a5857c995d970d82cb63c75734161a9a17f8aa5dc621636ce3841b5ae34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_northcutt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:49:34 compute-0 systemd[1]: libpod-conmon-7c946a5857c995d970d82cb63c75734161a9a17f8aa5dc621636ce3841b5ae34.scope: Deactivated successfully.
Nov 25 09:49:34 compute-0 sudo[255564]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:34 compute-0 sudo[255696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:49:34 compute-0 sudo[255696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:34 compute-0 sudo[255696]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:34 compute-0 sudo[255721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:49:34 compute-0 sudo[255721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:34 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:34 compute-0 podman[255779]: 2025-11-25 09:49:34.585826833 +0000 UTC m=+0.024614340 container create d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:49:34 compute-0 systemd[1]: Started libpod-conmon-d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf.scope.
Nov 25 09:49:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:49:34 compute-0 podman[255779]: 2025-11-25 09:49:34.631364916 +0000 UTC m=+0.070152433 container init d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_goldwasser, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:49:34 compute-0 podman[255779]: 2025-11-25 09:49:34.635429028 +0000 UTC m=+0.074216524 container start d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:49:34 compute-0 podman[255779]: 2025-11-25 09:49:34.636456534 +0000 UTC m=+0.075244032 container attach d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:49:34 compute-0 pensive_goldwasser[255792]: 167 167
Nov 25 09:49:34 compute-0 systemd[1]: libpod-d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf.scope: Deactivated successfully.
Nov 25 09:49:34 compute-0 conmon[255792]: conmon d7ff40600fcb51d6afa9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf.scope/container/memory.events
Nov 25 09:49:34 compute-0 podman[255779]: 2025-11-25 09:49:34.639184558 +0000 UTC m=+0.077972055 container died d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_goldwasser, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee0a15226b450c8dfdbc77469c4902dfed208fa0e7f48165b61f9e91927864d7-merged.mount: Deactivated successfully.
Nov 25 09:49:34 compute-0 podman[255779]: 2025-11-25 09:49:34.655279997 +0000 UTC m=+0.094067495 container remove d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:49:34 compute-0 podman[255779]: 2025-11-25 09:49:34.576375476 +0000 UTC m=+0.015162983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:49:34 compute-0 systemd[1]: libpod-conmon-d7ff40600fcb51d6afa99ae788455d837a1b98f6657f09bbf31b91015eff9ecf.scope: Deactivated successfully.
Nov 25 09:49:34 compute-0 podman[255815]: 2025-11-25 09:49:34.771151065 +0000 UTC m=+0.027903042 container create 1ddd66b8af9c21f8d35151506d73cb25bde3c0c6499115294e32285e4f5da195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_faraday, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:49:34 compute-0 ceph-mon[74207]: pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:34 compute-0 systemd[1]: Started libpod-conmon-1ddd66b8af9c21f8d35151506d73cb25bde3c0c6499115294e32285e4f5da195.scope.
Nov 25 09:49:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13a02919f5fbe70c428eb81b8fff6572683d07aa568200fca038ace72c3883d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13a02919f5fbe70c428eb81b8fff6572683d07aa568200fca038ace72c3883d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13a02919f5fbe70c428eb81b8fff6572683d07aa568200fca038ace72c3883d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13a02919f5fbe70c428eb81b8fff6572683d07aa568200fca038ace72c3883d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:34 compute-0 podman[255815]: 2025-11-25 09:49:34.827857321 +0000 UTC m=+0.084609307 container init 1ddd66b8af9c21f8d35151506d73cb25bde3c0c6499115294e32285e4f5da195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:49:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:34 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:34 compute-0 podman[255815]: 2025-11-25 09:49:34.833952401 +0000 UTC m=+0.090704378 container start 1ddd66b8af9c21f8d35151506d73cb25bde3c0c6499115294e32285e4f5da195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:49:34 compute-0 podman[255815]: 2025-11-25 09:49:34.835052275 +0000 UTC m=+0.091804270 container attach 1ddd66b8af9c21f8d35151506d73cb25bde3c0c6499115294e32285e4f5da195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:49:34 compute-0 podman[255815]: 2025-11-25 09:49:34.758323995 +0000 UTC m=+0.015075991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:49:35 compute-0 youthful_faraday[255829]: {
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:     "1": [
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:         {
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "devices": [
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "/dev/loop3"
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             ],
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "lv_name": "ceph_lv0",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "lv_size": "21470642176",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "name": "ceph_lv0",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "tags": {
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.cluster_name": "ceph",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.crush_device_class": "",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.encrypted": "0",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.osd_id": "1",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.type": "block",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.vdo": "0",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:                 "ceph.with_tpm": "0"
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             },
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "type": "block",
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:             "vg_name": "ceph_vg0"
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:         }
Nov 25 09:49:35 compute-0 youthful_faraday[255829]:     ]
Nov 25 09:49:35 compute-0 youthful_faraday[255829]: }
Nov 25 09:49:35 compute-0 systemd[1]: libpod-1ddd66b8af9c21f8d35151506d73cb25bde3c0c6499115294e32285e4f5da195.scope: Deactivated successfully.
Nov 25 09:49:35 compute-0 podman[255838]: 2025-11-25 09:49:35.097005829 +0000 UTC m=+0.016075095 container died 1ddd66b8af9c21f8d35151506d73cb25bde3c0c6499115294e32285e4f5da195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f13a02919f5fbe70c428eb81b8fff6572683d07aa568200fca038ace72c3883d-merged.mount: Deactivated successfully.
Nov 25 09:49:35 compute-0 podman[255838]: 2025-11-25 09:49:35.116086478 +0000 UTC m=+0.035155744 container remove 1ddd66b8af9c21f8d35151506d73cb25bde3c0c6499115294e32285e4f5da195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:49:35 compute-0 systemd[1]: libpod-conmon-1ddd66b8af9c21f8d35151506d73cb25bde3c0c6499115294e32285e4f5da195.scope: Deactivated successfully.
Nov 25 09:49:35 compute-0 sudo[255721]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:35 compute-0 sudo[255849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:49:35 compute-0 sudo[255849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:35 compute-0 sudo[255849]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:35 compute-0 sudo[255874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:49:35 compute-0 sudo[255874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:35 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:35 compute-0 podman[255930]: 2025-11-25 09:49:35.498708744 +0000 UTC m=+0.027099886 container create 5a0a24a37c1420e2212d7558979f1904e61264c2a8405b7d14716dab8d54a74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:49:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:35 compute-0 systemd[1]: Started libpod-conmon-5a0a24a37c1420e2212d7558979f1904e61264c2a8405b7d14716dab8d54a74c.scope.
Nov 25 09:49:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:49:35 compute-0 podman[255930]: 2025-11-25 09:49:35.549189124 +0000 UTC m=+0.077580267 container init 5a0a24a37c1420e2212d7558979f1904e61264c2a8405b7d14716dab8d54a74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bartik, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:49:35 compute-0 podman[255930]: 2025-11-25 09:49:35.552967666 +0000 UTC m=+0.081358808 container start 5a0a24a37c1420e2212d7558979f1904e61264c2a8405b7d14716dab8d54a74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:49:35 compute-0 podman[255930]: 2025-11-25 09:49:35.554144674 +0000 UTC m=+0.082535818 container attach 5a0a24a37c1420e2212d7558979f1904e61264c2a8405b7d14716dab8d54a74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:49:35 compute-0 focused_bartik[255943]: 167 167
Nov 25 09:49:35 compute-0 systemd[1]: libpod-5a0a24a37c1420e2212d7558979f1904e61264c2a8405b7d14716dab8d54a74c.scope: Deactivated successfully.
Nov 25 09:49:35 compute-0 podman[255930]: 2025-11-25 09:49:35.557078746 +0000 UTC m=+0.085469890 container died 5a0a24a37c1420e2212d7558979f1904e61264c2a8405b7d14716dab8d54a74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bartik, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ee7d39d04406ab60d41060839b76826a857371e0e34d0365313b8bfb57e7486-merged.mount: Deactivated successfully.
Nov 25 09:49:35 compute-0 podman[255930]: 2025-11-25 09:49:35.574308806 +0000 UTC m=+0.102699949 container remove 5a0a24a37c1420e2212d7558979f1904e61264c2a8405b7d14716dab8d54a74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 09:49:35 compute-0 podman[255930]: 2025-11-25 09:49:35.487063191 +0000 UTC m=+0.015454354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:49:35 compute-0 systemd[1]: libpod-conmon-5a0a24a37c1420e2212d7558979f1904e61264c2a8405b7d14716dab8d54a74c.scope: Deactivated successfully.
Nov 25 09:49:35 compute-0 podman[255965]: 2025-11-25 09:49:35.693998661 +0000 UTC m=+0.031817149 container create 9be25f1d24d15077f5d0fe3cff0e4d0ea5215ff855a9c29304b63852415b418a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:49:35 compute-0 systemd[1]: Started libpod-conmon-9be25f1d24d15077f5d0fe3cff0e4d0ea5215ff855a9c29304b63852415b418a.scope.
Nov 25 09:49:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb145d9a2c55258c05a3969217e95b049950d3997d1a4ccc205b8719059bf41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb145d9a2c55258c05a3969217e95b049950d3997d1a4ccc205b8719059bf41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb145d9a2c55258c05a3969217e95b049950d3997d1a4ccc205b8719059bf41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb145d9a2c55258c05a3969217e95b049950d3997d1a4ccc205b8719059bf41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:49:35 compute-0 podman[255965]: 2025-11-25 09:49:35.748646327 +0000 UTC m=+0.086464835 container init 9be25f1d24d15077f5d0fe3cff0e4d0ea5215ff855a9c29304b63852415b418a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:49:35 compute-0 podman[255965]: 2025-11-25 09:49:35.754423687 +0000 UTC m=+0.092242175 container start 9be25f1d24d15077f5d0fe3cff0e4d0ea5215ff855a9c29304b63852415b418a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:49:35 compute-0 podman[255965]: 2025-11-25 09:49:35.755575569 +0000 UTC m=+0.093394057 container attach 9be25f1d24d15077f5d0fe3cff0e4d0ea5215ff855a9c29304b63852415b418a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:49:35 compute-0 podman[255965]: 2025-11-25 09:49:35.683329158 +0000 UTC m=+0.021147666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:49:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:35.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:36 compute-0 infallible_wilbur[255979]: {}
Nov 25 09:49:36 compute-0 lvm[256057]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:49:36 compute-0 lvm[256057]: VG ceph_vg0 finished
Nov 25 09:49:36 compute-0 systemd[1]: libpod-9be25f1d24d15077f5d0fe3cff0e4d0ea5215ff855a9c29304b63852415b418a.scope: Deactivated successfully.
Nov 25 09:49:36 compute-0 podman[256058]: 2025-11-25 09:49:36.243725794 +0000 UTC m=+0.018532858 container died 9be25f1d24d15077f5d0fe3cff0e4d0ea5215ff855a9c29304b63852415b418a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cb145d9a2c55258c05a3969217e95b049950d3997d1a4ccc205b8719059bf41-merged.mount: Deactivated successfully.
Nov 25 09:49:36 compute-0 podman[256058]: 2025-11-25 09:49:36.261703553 +0000 UTC m=+0.036510597 container remove 9be25f1d24d15077f5d0fe3cff0e4d0ea5215ff855a9c29304b63852415b418a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 09:49:36 compute-0 systemd[1]: libpod-conmon-9be25f1d24d15077f5d0fe3cff0e4d0ea5215ff855a9c29304b63852415b418a.scope: Deactivated successfully.
Nov 25 09:49:36 compute-0 sudo[255874]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:49:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:36 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:49:36 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:36 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.24757 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:36 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 09:49:36 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 09:49:36 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.24760 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:36 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 09:49:36 compute-0 ceph-mgr[74476]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 09:49:36 compute-0 sudo[256071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:49:36 compute-0 sudo[256071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:36 compute-0 sudo[256071]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:36 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.24757 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 25 09:49:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:36 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:36 compute-0 ceph-mon[74207]: pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:36 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:36 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:49:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3958313225' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 09:49:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/2382711148' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 09:49:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:36 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:37.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:37.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:37.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:37.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:49:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:37 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:37.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:37 compute-0 ceph-mon[74207]: from='client.24757 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:37 compute-0 ceph-mon[74207]: from='client.24760 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 09:49:37 compute-0 ceph-mon[74207]: from='client.24757 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 25 09:49:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:37.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:38 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:38 compute-0 ceph-mon[74207]: pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:49:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:38 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:39 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:39.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:39.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:40] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:49:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:40] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:49:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:40 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:40 compute-0 ceph-mon[74207]: pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:40 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:40 compute-0 podman[256100]: 2025-11-25 09:49:40.999350832 +0000 UTC m=+0.062014644 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:49:41 compute-0 sudo[256116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:49:41 compute-0 sudo[256116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:49:41 compute-0 sudo[256116]: pam_unix(sudo:session): session closed for user root
Nov 25 09:49:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:41 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9240060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:41.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:42 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:42 compute-0 ceph-mon[74207]: pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:42 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:43 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:43.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:43 compute-0 ceph-mon[74207]: pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:43.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:44 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:44 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:49:44
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', '.nfs', 'vms', 'backups', 'cephfs.cephfs.meta']
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:49:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:49:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:49:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:49:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:49:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:45 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:45.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:45.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:46 compute-0 ceph-mon[74207]: pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:46 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:46 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:47.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:47.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:47.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:47.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:49:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:47 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:47.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:48 compute-0 ceph-mon[74207]: pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:49:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:48 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:48 compute-0 podman[256151]: 2025-11-25 09:49:48.985547134 +0000 UTC m=+0.052729471 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:49:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:49 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91000d9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:49.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:49.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:50] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:49:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:49:50] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:49:50 compute-0 ceph-mon[74207]: pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:50 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9280035e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:49:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[251299]: 25/11/2025 09:49:51 : epoch 69257b51 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9280035e0 fd 38 proxy ignored for local
Nov 25 09:49:51 compute-0 kernel: ganesha.nfsd[256146]: segfault at 50 ip 00007ff9be19c32e sp 00007ff9837fd210 error 4 in libntirpc.so.5.8[7ff9be181000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 09:49:51 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:49:51 compute-0 systemd[1]: Started Process Core Dump (PID 256176/UID 0).
Nov 25 09:49:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:51.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:52 compute-0 ceph-mon[74207]: pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:52 compute-0 systemd-coredump[256177]: Process 251303 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007ff9be19c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:49:52 compute-0 systemd[1]: systemd-coredump@8-256176-0.service: Deactivated successfully.
Nov 25 09:49:52 compute-0 systemd[1]: systemd-coredump@8-256176-0.service: Consumed 1.035s CPU time.
Nov 25 09:49:52 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:49:52 compute-0 podman[256186]: 2025-11-25 09:49:52.755543709 +0000 UTC m=+0.019726677 container died bf08aebaf45a5f98995f2aa0990acb39be3ad8282b92f73ae3cb21c6130e2d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:49:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdd20b6ddf52584e26d5b576847830cb5440711394c81642e12bb190aa6d5683-merged.mount: Deactivated successfully.
Nov 25 09:49:52 compute-0 podman[256186]: 2025-11-25 09:49:52.775498735 +0000 UTC m=+0.039681704 container remove bf08aebaf45a5f98995f2aa0990acb39be3ad8282b92f73ae3cb21c6130e2d6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 25 09:49:52 compute-0 podman[256181]: 2025-11-25 09:49:52.777632919 +0000 UTC m=+0.047332958 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:49:52 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:49:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:52 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:49:52 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.087s CPU time.
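systemd reports the ganesha container exiting with status=139, and podman (like a shell) encodes death-by-signal as 128 plus the signal number, so 139 decodes to SIGSEGV, matching the systemd-coredump entry for ganesha.nfsd above:

```python
import signal

# Decode the status=139 reported for the nfs.cephfs.2.0 unit above:
# 128 + 11 (SIGSEGV), consistent with the ganesha.nfsd core dump.
status = 139
if status > 128:
    print(f"exit {status} -> killed by {signal.Signals(status - 128).name}")
```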
Nov 25 09:49:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:53.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:54 compute-0 ceph-mon[74207]: pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1000117064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:49:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1000117064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
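The client.openstack commands in the mon audit log ("df" plus "osd pool get-quota" on the volumes pool) have the shape of a Cinder capacity poll. The same queries can be reproduced by hand with the ceph CLI from a host that has a client keyring; the key names below follow the usual `ceph df -f json` output and should be checked against the local version:

```python
import json
import subprocess

# Reproduce the two audited queries with the ceph CLI (requires a keyring).
df = json.loads(subprocess.check_output(["ceph", "df", "--format", "json"]))
quota = json.loads(subprocess.check_output(
    ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"]))
print(df["stats"]["total_bytes"])
print(quota)
```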
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:49:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
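The autoscaler lines above expose their own arithmetic: pg target = (fraction of space used) × bias × a PG budget. A factor of 300 reproduces every logged target exactly; that is presumably mon_target_pg_per_osd (default 100) times the 3 OSDs backing this 60 GiB cluster, but the 300 is an inference from these numbers, not a value read from the configuration:

```python
# pg_budget=300 is inferred from the logged numbers (see lead-in), not read
# from this cluster's configuration.
def pg_target(usage_ratio: float, bias: float, pg_budget: int = 300) -> float:
    return usage_ratio * bias * pg_budget

print(pg_target(7.185749983720779e-06, 1.0))  # .mgr               -> 0.0021557249951162337
print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.0006104707950771635
print(pg_target(6.359070782053786e-08, 1.0))  # .nfs               -> 1.907721234616136e-05
```

The raw targets are then quantized (to 1, 16, or 32 above), and since every quantized value equals the pool's current pg_num, this pass ends without adjusting anything.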
Nov 25 09:49:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:55.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:55.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:56 compute-0 ceph-mon[74207]: pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:49:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:57.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:57.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:57.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:49:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:49:57.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
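All three dashboard webhook targets fail the same way: the np000553469{4,5,6}.shiftstack names get NXDOMAIN from the resolver at 192.168.122.80, so every notification attempt is doomed before the HTTP POST is even made. The failing step is reproducible with a plain resolver call from the same host:

```python
import socket

# Same lookup Alertmanager keeps retrying; run on compute-0 so the system
# resolver (192.168.122.80 in this log) is the one consulted.
try:
    socket.getaddrinfo("np0005534694.shiftstack", 8443)
except socket.gaierror as exc:
    print("lookup failed:", exc)  # expect NXDOMAIN / "Name or service not known"
```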
Nov 25 09:49:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:49:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/094957 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:49:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:57.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:49:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:57.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:58 compute-0 ceph-mon[74207]: pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:49:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:49:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:49:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:59.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:49:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:49:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:49:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:59.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:49:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:49:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:00 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 25 09:50:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:00] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 25 09:50:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:00] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 25 09:50:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095000 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
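haproxy's "Layer4 connection problem ... Connection refused" checks are nothing more than bare TCP connects: a backend is DOWN while nothing is listening (the ganesha daemon here is mid-crash-loop) and UP again as soon as a connect succeeds. A minimal equivalent probe, using a documentation-range address and the standard NFS port 2049 as placeholders since the real backend endpoints are not shown in this log:

```python
import socket

# Bare TCP connect probe approximating haproxy's Layer4 check; address and
# port are placeholders, not values taken from this log.
def layer4_check(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True   # connect succeeded -> server UP
    except OSError:
        return False      # e.g. ConnectionRefusedError -> server DOWN

print(layer4_check("192.0.2.10", 2049))
```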
Nov 25 09:50:00 compute-0 ceph-mon[74207]: pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:00 compute-0 ceph-mon[74207]: overall HEALTH_OK
Nov 25 09:50:01 compute-0 sudo[256243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:50:01 compute-0 sudo[256243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:01 compute-0 sudo[256243]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:01.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:02 compute-0 ceph-mon[74207]: pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:02 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 9.
Nov 25 09:50:02 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:50:02 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.087s CPU time.
Nov 25 09:50:02 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
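The unit failed at 09:49:52 and systemd scheduled its ninth restart at 09:50:02; the 10-second gap is consistent with a RestartSec=10 setting in the cephadm-generated unit, though that is inferred from the timestamps rather than read from the unit file:

```python
from datetime import datetime

# Gap between the logged failure and the scheduled restart (an inference
# about RestartSec, not a value read from the unit file).
failed = datetime(2025, 11, 25, 9, 49, 52)
restarted = datetime(2025, 11, 25, 9, 50, 2)
print((restarted - failed).total_seconds())  # 10.0
```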
Nov 25 09:50:03 compute-0 podman[256308]: 2025-11-25 09:50:03.071932442 +0000 UTC m=+0.029281328 container create 944dada5a2a8753b215fe568c16e778485e03736646e5e07b9a04a882e698a61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a2f059bdad4f1d59fe2188cfb8b0a2d1330df29e61c1bb0b3adf8f6b1eac3d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a2f059bdad4f1d59fe2188cfb8b0a2d1330df29e61c1bb0b3adf8f6b1eac3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a2f059bdad4f1d59fe2188cfb8b0a2d1330df29e61c1bb0b3adf8f6b1eac3d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a2f059bdad4f1d59fe2188cfb8b0a2d1330df29e61c1bb0b3adf8f6b1eac3d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
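The kernel's "supports timestamps until 2038 (0x7fffffff)" notes on these bind mounts refer to the signed 32-bit time_t ceiling; decoding the constant gives the familiar Y2038 boundary:

```python
from datetime import datetime, timezone

# 0x7fffffff is the maximum signed 32-bit time_t.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```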
Nov 25 09:50:03 compute-0 podman[256308]: 2025-11-25 09:50:03.108878318 +0000 UTC m=+0.066227224 container init 944dada5a2a8753b215fe568c16e778485e03736646e5e07b9a04a882e698a61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:50:03 compute-0 podman[256308]: 2025-11-25 09:50:03.115191379 +0000 UTC m=+0.072540265 container start 944dada5a2a8753b215fe568c16e778485e03736646e5e07b9a04a882e698a61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:50:03 compute-0 bash[256308]: 944dada5a2a8753b215fe568c16e778485e03736646e5e07b9a04a882e698a61
Nov 25 09:50:03 compute-0 podman[256308]: 2025-11-25 09:50:03.059619892 +0000 UTC m=+0.016968798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:50:03 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:50:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:03 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:50:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:03 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:50:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:03 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:50:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:03 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:50:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:03 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:50:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:03 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:50:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:03 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:50:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:03 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:50:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:03.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:03.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:04 compute-0 ceph-mon[74207]: pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:50:05.377 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:50:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:50:05.377 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:50:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:50:05.377 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:50:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:05.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:05.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:06 compute-0 ceph-mon[74207]: pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:50:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:07.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:07.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:07.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:07.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Nov 25 09:50:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:07.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:07.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:08 compute-0 ceph-mon[74207]: pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Nov 25 09:50:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:09 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:50:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:09 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:50:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Nov 25 09:50:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:09.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:09.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:10] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 25 09:50:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:10] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 25 09:50:10 compute-0 ceph-mon[74207]: pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Nov 25 09:50:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:11.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:11.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:12 compute-0 podman[256372]: 2025-11-25 09:50:12.008405004 +0000 UTC m=+0.045216998 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=0
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:12 compute-0 ceph-mon[74207]: pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:13 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
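These svc_vc_recv events fire each time something connects to the Ganesha port and closes without sending the PROXY protocol preamble the listener is configured to expect, which is consistent with the bare Layer4 health checks above; the literal "%" in "rlen = %" looks like a format-specifier quirk in the library's own log message. For reference, a client that does speak PROXY protocol v1 sends one CRLF-terminated text line before any RPC bytes (addresses and ports below are placeholders):

```python
# PROXY protocol v1 preamble; all values here are placeholders, not taken
# from this log.
def proxy_v1_preamble(src: str, dst: str, sport: int, dport: int) -> bytes:
    return f"PROXY TCP4 {src} {dst} {sport} {dport}\r\n".encode()

print(proxy_v1_preamble("192.0.2.1", "192.0.2.2", 51234, 2049))
```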
Nov 25 09:50:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:13.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:13.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:14 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c002380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:14 compute-0 ceph-mon[74207]: pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:14 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7510002340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:50:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:50:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:50:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:50:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:50:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:50:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:50:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095015 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:50:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [NOTICE] 328/095015 (4) : haproxy version is 2.3.17-d1c9119
Nov 25 09:50:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [NOTICE] 328/095015 (4) : path to executable is /usr/local/sbin/haproxy
Nov 25 09:50:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [ALERT] 328/095015 (4) : backend 'backend' has no server available!
Nov 25 09:50:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:15 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
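The grace-period messages trace a clean early exit: the server entered a 90-second grace window at startup, reloaded client recovery info from the RADOS backend, and the reaper lifted grace as soon as there were no clients with reclaimable state (clid count 0). A sketch of the condition implied by these messages, not Ganesha's actual code:

```python
# Condition implied by "check grace:reclaim complete(0) clid count(0)"
# followed by "NFS Server Now NOT IN GRACE"; a sketch, not Ganesha's code.
def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
    return clid_count == 0 or reclaim_complete >= clid_count

print(can_lift_grace(0, 0))  # True -> grace lifted before the 90 s expire
```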
Nov 25 09:50:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095015 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:50:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:15 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7510002340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:15.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:15.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:16 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514002540 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:16 compute-0 nova_compute[253512]: 2025-11-25 09:50:16.594 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-0 nova_compute[253512]: 2025-11-25 09:50:16.595 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-0 nova_compute[253512]: 2025-11-25 09:50:16.606 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-0 nova_compute[253512]: 2025-11-25 09:50:16.607 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-0 nova_compute[253512]: 2025-11-25 09:50:16.607 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-0 nova_compute[253512]: 2025-11-25 09:50:16.607 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:16 compute-0 nova_compute[253512]: 2025-11-25 09:50:16.607 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:50:16 compute-0 ceph-mon[74207]: pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4025101835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:16 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c002e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:17.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:17.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:17.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:17.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.481 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.481 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:50:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:17 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100031d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.504 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.504 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.504 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
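The acquire/release pairs around "compute_resources" come from oslo.concurrency's lockutils, which Nova uses to serialize resource-tracker work against concurrent instance claims. A sketch of the same pattern with the real lockutils.synchronized decorator (assumes oslo.concurrency is installed):

```python
# Sketch of the locking pattern behind the "compute_resources" messages,
# using oslo.concurrency's real API. Nova wraps resource-tracker methods
# this way so periodic tasks and instance claims cannot race on the same
# in-memory resource view.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    # Critical section: held for ~0.000s in the log above.
    pass

clean_compute_node_cache()
```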
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.504 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.505 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:50:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:17.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
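The radosgw trio above (request start, request done, beast access line) repeats every two seconds from 192.168.122.100 and .102: anonymous HEAD / probes answered with 200 and an empty body, consistent with load-balancer health checks. A sketch issuing the same probe; the target host and port are assumptions, since the log does not record which port the beast frontend listens on:

```python
import http.client

# Sketch of the health-probe pattern in the radosgw lines: an anonymous
# HEAD / answered 200 with no body. Host and port are assumptions.
conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)   # a healthy RGW answers 200
conn.close()
```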
Nov 25 09:50:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3765179905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3025817659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2335408978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:50:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1890353352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:17 compute-0 nova_compute[253512]: 2025-11-25 09:50:17.850 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
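The resource tracker sizes its Ceph-backed storage by shelling out to ceph df, as the Running cmd / CMD returned pair above shows (a 0.345s round trip). A sketch running the same command and summarizing the result; the JSON field names follow recent Ceph releases and should be treated as an assumption:

```python
import json
import subprocess

# Sketch: run the command Nova executes above and summarize capacity.
# Requires a reachable cluster and the client.openstack keyring, exactly
# as in the log; "stats"/"total_bytes" naming is an assumption.
cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
stats = json.loads(out)["stats"]
print("total GiB:", stats["total_bytes"] / 2**30,
      "avail GiB:", stats["total_avail_bytes"] / 2**30)
```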
Nov 25 09:50:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
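The mon's cache autotune line above reports its sizes in raw bytes; converted to MiB the split is easier to read:

```python
# Quick unit conversion of the _set_new_cache_sizes values above (bytes).
for name, val in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                  ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
    print(f"{name}: {val / 2**20:.0f} MiB")
# cache_size ~973 MiB, inc/full_alloc 332 MiB, kv_alloc 304 MiB
```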
Nov 25 09:50:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:17.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.072 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.074 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.074 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.074 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.131 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.132 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.376 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:50:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:18 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100031d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:50:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082497368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.767 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:50:18 compute-0 ceph-mon[74207]: pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:50:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1890353352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3082497368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.774 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.787 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
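The inventory payload above is exactly what Placement uses to compute schedulable capacity: (total - reserved) * allocation_ratio per resource class. Worked through with the numbers from this line:

```python
# Worked example using the inventory reported above.
inventory = {
    'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {cap:g} schedulable")
# MEMORY_MB: 7169, VCPU: 16, DISK_GB: 53.1
```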
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.788 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:50:18 compute-0 nova_compute[253512]: 2025-11-25 09:50:18.789 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:50:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:18 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100031d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:19 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c002e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:19.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:19.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:20 compute-0 podman[256455]: 2025-11-25 09:50:20.008184912 +0000 UTC m=+0.064600478 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible)
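The podman line above is a health_status event: the ovn_controller container's configured check (/openstack/healthcheck) ran and reported healthy with a zero failing streak. The same check can be triggered by hand with podman's healthcheck subcommand, sketched here:

```python
import subprocess

# Sketch: manually trigger the check podman ran above. `podman healthcheck
# run` executes the container's configured healthcheck and exits 0 when
# healthy; run on the host as a user who can see ovn_controller.
rc = subprocess.run(["podman", "healthcheck", "run",
                     "ovn_controller"]).returncode
print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
```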
Nov 25 09:50:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:20] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Nov 25 09:50:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:20] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Nov 25 09:50:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095020 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:50:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:20 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100031d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:20 compute-0 ceph-mon[74207]: pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 25 09:50:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:20 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514002e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:20 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:50:20.960 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:50:20 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:50:20.961 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:50:20 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:50:20.962 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
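The three ovn_metadata_agent lines above are the nb_cfg acknowledgement cycle: the agent sees SB_Global.nb_cfg move to 2 and writes neutron:ovn-metadata-sb-cfg=2 into its Chassis_Private external_ids via an ovsdbapp DbSetCommand. An illustrative manual equivalent using the ovn-sbctl CLI (record UUID copied from the log; the key contains a colon, so it is double-quoted for the CLI parser):

```python
import subprocess

# Illustrative equivalent of the DbSetCommand above, expressed with
# ovn-sbctl: write the acknowledged nb_cfg value into the chassis record.
# Running this by hand on a live system is for illustration only.
subprocess.run([
    "ovn-sbctl", "set", "Chassis_Private",
    "a23dd616-1012-4f28-8d7d-927fdaae5f69",
    'external_ids:"neutron:ovn-metadata-sb-cfg"=2',
], check=True)
```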
Nov 25 09:50:21 compute-0 sudo[256477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:50:21 compute-0 sudo[256477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:21 compute-0 sudo[256477]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:50:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:21 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:21.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:21.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:22 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c002e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:22 compute-0 ceph-mon[74207]: pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:50:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:22 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:50:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:22 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c002e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:22 compute-0 podman[256504]: 2025-11-25 09:50:22.986467799 +0000 UTC m=+0.048172491 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:50:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Nov 25 09:50:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:23 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514002e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:23.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:23.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:24 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:24 compute-0 ceph-mon[74207]: pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Nov 25 09:50:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:24 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Nov 25 09:50:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:25 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:25.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:25 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:50:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:25 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:50:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:25.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:26 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514002e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:26 compute-0 ceph-mon[74207]: pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Nov 25 09:50:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:26 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514002e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:27.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:27.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:27.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:27.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 25 09:50:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:27 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:27.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:27.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:28 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:28 compute-0 ceph-mon[74207]: pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 25 09:50:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:28 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
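The ganesha reaper lines trace a full NFS grace cycle: the server enters a 90-second grace window (09:50:22), reloads reclaim data from the backend (09:50:25), finds zero clients with outstanding reclaims, and lifts grace early (09:50:28). A toy sketch of that decision, illustrative only and not ganesha code:

```python
# Toy sketch of the grace lifecycle traced above (not ganesha code): stay
# in grace for up to `duration` seconds, but lift early once no client
# still needs to reclaim state, the reclaim complete(0) clid count(0) case.
def run_grace(clients_needing_reclaim: int, duration: int = 90) -> str:
    state = f"IN GRACE, duration {duration}"
    if clients_needing_reclaim == 0:
        state = "NOT IN GRACE"   # lifted early, as in the log
    return state

print(run_grace(0))
```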
Nov 25 09:50:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:28 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:50:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:29 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:29.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:50:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:29.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:30] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:50:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:30] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:50:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:30 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:30 compute-0 ceph-mon[74207]: pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:50:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:30 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:50:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:31 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:31.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:31 compute-0 ceph-mon[74207]: pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 25 09:50:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:31.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:32 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520002270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:32 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c005410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:50:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:33 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:33.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:33.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:34 compute-0 ceph-mon[74207]: pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:50:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:34 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:34 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520002270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095035 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:50:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:50:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:35 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c005410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:35.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:36 compute-0 ceph-mon[74207]: pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 25 09:50:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:36 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:36 compute-0 sudo[256536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:50:36 compute-0 sudo[256536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:36 compute-0 sudo[256536]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:36 compute-0 sudo[256561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:50:36 compute-0 sudo[256561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:36 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:36 compute-0 sudo[256561]: pam_unix(sudo:session): session closed for user root
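The sudo sessions above are cephadm's orchestrator at work: it locates python3, then runs the copied cephadm binary with gather-facts, which prints host facts as JSON. A sketch invoking the same call (binary path copied from the log; root is required, hence the sudo, and the exact fact keys are an assumption):

```python
import json
import subprocess

# Sketch: invoke the gather-facts call the orchestrator ran above and
# read a couple of fields from its JSON output.
cephadm = ("/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
           "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
out = subprocess.run(["sudo", "python3", cephadm, "gather-facts"],
                     capture_output=True, text=True, check=True).stdout
facts = json.loads(out)
print(facts.get("hostname"), facts.get("kernel"))
```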
Nov 25 09:50:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:37.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:37.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:37.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:37.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:50:37 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:50:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:50:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:50:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:50:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:50:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:50:37 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:50:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:50:37 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:50:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:50:37 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:50:37 compute-0 sudo[256615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:50:37 compute-0 sudo[256615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:37 compute-0 sudo[256615]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:37 compute-0 sudo[256640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:50:37 compute-0 sudo[256640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
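Here the orchestrator provisions an OSD: ceph-volume lvm batch is pointed at the pre-built logical volume /dev/ceph_vg0/ceph_lv0 with --no-auto and --no-systemd (cephadm manages the unit itself). The same layout can be previewed first with ceph-volume's --report dry-run flag, sketched here:

```python
import subprocess

# Sketch: preview the OSD layout the orchestrator applies above.
# --report is a dry run; drop it (and add --yes, as in the log) to
# actually create the OSD. Run on the host as root.
subprocess.run(["ceph-volume", "lvm", "batch", "--no-auto",
                "/dev/ceph_vg0/ceph_lv0", "--report"], check=True)
```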
Nov 25 09:50:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:50:37 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:50:37 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:50:37 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:37 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:37 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:50:37 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:50:37 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:50:37 compute-0 podman[256697]: 2025-11-25 09:50:37.474198356 +0000 UTC m=+0.031353956 container create 94475c86e28fd37506472182834a8d25026071528d1583fdde9c744d92788a2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclean, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:50:37 compute-0 systemd[1]: Started libpod-conmon-94475c86e28fd37506472182834a8d25026071528d1583fdde9c744d92788a2b.scope.
Nov 25 09:50:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:37 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520002270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:50:37 compute-0 podman[256697]: 2025-11-25 09:50:37.526621819 +0000 UTC m=+0.083777438 container init 94475c86e28fd37506472182834a8d25026071528d1583fdde9c744d92788a2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclean, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:50:37 compute-0 podman[256697]: 2025-11-25 09:50:37.531338008 +0000 UTC m=+0.088493609 container start 94475c86e28fd37506472182834a8d25026071528d1583fdde9c744d92788a2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:50:37 compute-0 podman[256697]: 2025-11-25 09:50:37.532590941 +0000 UTC m=+0.089746541 container attach 94475c86e28fd37506472182834a8d25026071528d1583fdde9c744d92788a2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclean, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:50:37 compute-0 eloquent_mclean[256710]: 167 167
Nov 25 09:50:37 compute-0 systemd[1]: libpod-94475c86e28fd37506472182834a8d25026071528d1583fdde9c744d92788a2b.scope: Deactivated successfully.
Nov 25 09:50:37 compute-0 podman[256697]: 2025-11-25 09:50:37.535898426 +0000 UTC m=+0.093054026 container died 94475c86e28fd37506472182834a8d25026071528d1583fdde9c744d92788a2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9fdc9907246218d4290dbc46d77bdb5c85cdb7a00e68e81236ecaaf078da9c0-merged.mount: Deactivated successfully.
Nov 25 09:50:37 compute-0 podman[256697]: 2025-11-25 09:50:37.55870565 +0000 UTC m=+0.115861249 container remove 94475c86e28fd37506472182834a8d25026071528d1583fdde9c744d92788a2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:50:37 compute-0 podman[256697]: 2025-11-25 09:50:37.463296334 +0000 UTC m=+0.020451954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:50:37 compute-0 systemd[1]: libpod-conmon-94475c86e28fd37506472182834a8d25026071528d1583fdde9c744d92788a2b.scope: Deactivated successfully.
Nov 25 09:50:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:37.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:37 compute-0 podman[256732]: 2025-11-25 09:50:37.676155692 +0000 UTC m=+0.028168541 container create 1989c7f28f4f100fa0f3b50afe3b086591bbb15ac6b18d37ace41213efc4b752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_solomon, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:50:37 compute-0 systemd[1]: Started libpod-conmon-1989c7f28f4f100fa0f3b50afe3b086591bbb15ac6b18d37ace41213efc4b752.scope.
Nov 25 09:50:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67800fb67a5c7156e6b1037127966bdcc61040671642eabdaa6c9c70017c6329/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67800fb67a5c7156e6b1037127966bdcc61040671642eabdaa6c9c70017c6329/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67800fb67a5c7156e6b1037127966bdcc61040671642eabdaa6c9c70017c6329/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67800fb67a5c7156e6b1037127966bdcc61040671642eabdaa6c9c70017c6329/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67800fb67a5c7156e6b1037127966bdcc61040671642eabdaa6c9c70017c6329/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:37 compute-0 podman[256732]: 2025-11-25 09:50:37.737232227 +0000 UTC m=+0.089245096 container init 1989c7f28f4f100fa0f3b50afe3b086591bbb15ac6b18d37ace41213efc4b752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:50:37 compute-0 podman[256732]: 2025-11-25 09:50:37.742054698 +0000 UTC m=+0.094067547 container start 1989c7f28f4f100fa0f3b50afe3b086591bbb15ac6b18d37ace41213efc4b752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 09:50:37 compute-0 podman[256732]: 2025-11-25 09:50:37.743233289 +0000 UTC m=+0.095246138 container attach 1989c7f28f4f100fa0f3b50afe3b086591bbb15ac6b18d37ace41213efc4b752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:50:37 compute-0 podman[256732]: 2025-11-25 09:50:37.664726487 +0000 UTC m=+0.016739356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:50:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:37.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:37 compute-0 strange_solomon[256746]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:50:37 compute-0 strange_solomon[256746]: --> All data devices are unavailable
Nov 25 09:50:37 compute-0 systemd[1]: libpod-1989c7f28f4f100fa0f3b50afe3b086591bbb15ac6b18d37ace41213efc4b752.scope: Deactivated successfully.
Nov 25 09:50:38 compute-0 podman[256762]: 2025-11-25 09:50:38.02762383 +0000 UTC m=+0.017207338 container died 1989c7f28f4f100fa0f3b50afe3b086591bbb15ac6b18d37ace41213efc4b752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-67800fb67a5c7156e6b1037127966bdcc61040671642eabdaa6c9c70017c6329-merged.mount: Deactivated successfully.
Nov 25 09:50:38 compute-0 podman[256762]: 2025-11-25 09:50:38.049779074 +0000 UTC m=+0.039362583 container remove 1989c7f28f4f100fa0f3b50afe3b086591bbb15ac6b18d37ace41213efc4b752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:50:38 compute-0 systemd[1]: libpod-conmon-1989c7f28f4f100fa0f3b50afe3b086591bbb15ac6b18d37ace41213efc4b752.scope: Deactivated successfully.
Nov 25 09:50:38 compute-0 sudo[256640]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:38 compute-0 sudo[256774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:50:38 compute-0 sudo[256774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:38 compute-0 sudo[256774]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:38 compute-0 sudo[256799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:50:38 compute-0 sudo[256799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:38 compute-0 ceph-mon[74207]: pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:50:38 compute-0 podman[256856]: 2025-11-25 09:50:38.461582229 +0000 UTC m=+0.028275402 container create 1c12e4217b8a981a3dec01c524f681597dd06ca0ea676e293cdc1474a771eb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:50:38 compute-0 systemd[1]: Started libpod-conmon-1c12e4217b8a981a3dec01c524f681597dd06ca0ea676e293cdc1474a771eb18.scope.
Nov 25 09:50:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:50:38 compute-0 podman[256856]: 2025-11-25 09:50:38.516825427 +0000 UTC m=+0.083518589 container init 1c12e4217b8a981a3dec01c524f681597dd06ca0ea676e293cdc1474a771eb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:50:38 compute-0 podman[256856]: 2025-11-25 09:50:38.521383449 +0000 UTC m=+0.088076602 container start 1c12e4217b8a981a3dec01c524f681597dd06ca0ea676e293cdc1474a771eb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bouman, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:50:38 compute-0 podman[256856]: 2025-11-25 09:50:38.522485196 +0000 UTC m=+0.089178359 container attach 1c12e4217b8a981a3dec01c524f681597dd06ca0ea676e293cdc1474a771eb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:50:38 compute-0 competent_bouman[256869]: 167 167
Nov 25 09:50:38 compute-0 systemd[1]: libpod-1c12e4217b8a981a3dec01c524f681597dd06ca0ea676e293cdc1474a771eb18.scope: Deactivated successfully.
Nov 25 09:50:38 compute-0 podman[256856]: 2025-11-25 09:50:38.524920035 +0000 UTC m=+0.091613208 container died 1c12e4217b8a981a3dec01c524f681597dd06ca0ea676e293cdc1474a771eb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:50:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:38 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d55b6be3c5829c6c61b38039b75eaab80eb6b5855e2727cb12ffc2e64d906726-merged.mount: Deactivated successfully.
Nov 25 09:50:38 compute-0 podman[256856]: 2025-11-25 09:50:38.544379549 +0000 UTC m=+0.111072712 container remove 1c12e4217b8a981a3dec01c524f681597dd06ca0ea676e293cdc1474a771eb18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_bouman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:50:38 compute-0 podman[256856]: 2025-11-25 09:50:38.449395866 +0000 UTC m=+0.016089050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:50:38 compute-0 systemd[1]: libpod-conmon-1c12e4217b8a981a3dec01c524f681597dd06ca0ea676e293cdc1474a771eb18.scope: Deactivated successfully.
Nov 25 09:50:38 compute-0 podman[256890]: 2025-11-25 09:50:38.666687009 +0000 UTC m=+0.029239659 container create 3d761b76f4d44247a4b1d45893f04befa5e493418a90049c1be2546cd2da1dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:50:38 compute-0 systemd[1]: Started libpod-conmon-3d761b76f4d44247a4b1d45893f04befa5e493418a90049c1be2546cd2da1dc9.scope.
Nov 25 09:50:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e373a8697457e375886f852a5a16126938341888e72aa103c265c18f64e68ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e373a8697457e375886f852a5a16126938341888e72aa103c265c18f64e68ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e373a8697457e375886f852a5a16126938341888e72aa103c265c18f64e68ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e373a8697457e375886f852a5a16126938341888e72aa103c265c18f64e68ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:38 compute-0 podman[256890]: 2025-11-25 09:50:38.724726148 +0000 UTC m=+0.087278799 container init 3d761b76f4d44247a4b1d45893f04befa5e493418a90049c1be2546cd2da1dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:50:38 compute-0 podman[256890]: 2025-11-25 09:50:38.729576069 +0000 UTC m=+0.092128721 container start 3d761b76f4d44247a4b1d45893f04befa5e493418a90049c1be2546cd2da1dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:50:38 compute-0 podman[256890]: 2025-11-25 09:50:38.730778576 +0000 UTC m=+0.093331227 container attach 3d761b76f4d44247a4b1d45893f04befa5e493418a90049c1be2546cd2da1dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:50:38 compute-0 podman[256890]: 2025-11-25 09:50:38.655397306 +0000 UTC m=+0.017949977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:50:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:38 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c005410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:38 compute-0 agitated_poitras[256903]: {
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:     "1": [
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:         {
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "devices": [
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "/dev/loop3"
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             ],
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "lv_name": "ceph_lv0",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "lv_size": "21470642176",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "name": "ceph_lv0",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "tags": {
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.cluster_name": "ceph",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.crush_device_class": "",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.encrypted": "0",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.osd_id": "1",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.type": "block",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.vdo": "0",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:                 "ceph.with_tpm": "0"
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             },
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "type": "block",
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:             "vg_name": "ceph_vg0"
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:         }
Nov 25 09:50:38 compute-0 agitated_poitras[256903]:     ]
Nov 25 09:50:38 compute-0 agitated_poitras[256903]: }
Nov 25 09:50:38 compute-0 systemd[1]: libpod-3d761b76f4d44247a4b1d45893f04befa5e493418a90049c1be2546cd2da1dc9.scope: Deactivated successfully.
Nov 25 09:50:38 compute-0 podman[256890]: 2025-11-25 09:50:38.983688714 +0000 UTC m=+0.346241375 container died 3d761b76f4d44247a4b1d45893f04befa5e493418a90049c1be2546cd2da1dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e373a8697457e375886f852a5a16126938341888e72aa103c265c18f64e68ee-merged.mount: Deactivated successfully.
Nov 25 09:50:39 compute-0 podman[256890]: 2025-11-25 09:50:39.008604301 +0000 UTC m=+0.371156953 container remove 3d761b76f4d44247a4b1d45893f04befa5e493418a90049c1be2546cd2da1dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:50:39 compute-0 systemd[1]: libpod-conmon-3d761b76f4d44247a4b1d45893f04befa5e493418a90049c1be2546cd2da1dc9.scope: Deactivated successfully.
Nov 25 09:50:39 compute-0 sudo[256799]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:39 compute-0 sudo[256922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:50:39 compute-0 sudo[256922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:39 compute-0 sudo[256922]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:39 compute-0 sudo[256947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:50:39 compute-0 sudo[256947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:50:39 compute-0 podman[257003]: 2025-11-25 09:50:39.430356129 +0000 UTC m=+0.027044532 container create fbca74a81015e14582f4d8e740fbdc3e15d760321d36bbe36d46a54cca76053d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:50:39 compute-0 systemd[1]: Started libpod-conmon-fbca74a81015e14582f4d8e740fbdc3e15d760321d36bbe36d46a54cca76053d.scope.
Nov 25 09:50:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:50:39 compute-0 podman[257003]: 2025-11-25 09:50:39.480708689 +0000 UTC m=+0.077397091 container init fbca74a81015e14582f4d8e740fbdc3e15d760321d36bbe36d46a54cca76053d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:50:39 compute-0 podman[257003]: 2025-11-25 09:50:39.48561114 +0000 UTC m=+0.082299542 container start fbca74a81015e14582f4d8e740fbdc3e15d760321d36bbe36d46a54cca76053d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:50:39 compute-0 vigilant_greider[257016]: 167 167
Nov 25 09:50:39 compute-0 podman[257003]: 2025-11-25 09:50:39.488580638 +0000 UTC m=+0.085269041 container attach fbca74a81015e14582f4d8e740fbdc3e15d760321d36bbe36d46a54cca76053d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:50:39 compute-0 systemd[1]: libpod-fbca74a81015e14582f4d8e740fbdc3e15d760321d36bbe36d46a54cca76053d.scope: Deactivated successfully.
Nov 25 09:50:39 compute-0 podman[257003]: 2025-11-25 09:50:39.48923939 +0000 UTC m=+0.085927791 container died fbca74a81015e14582f4d8e740fbdc3e15d760321d36bbe36d46a54cca76053d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:50:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-817afabf112edd75f34bd6066b5bf774e29f206702c39ce4a6252129d2a6e217-merged.mount: Deactivated successfully.
Nov 25 09:50:39 compute-0 podman[257003]: 2025-11-25 09:50:39.508818999 +0000 UTC m=+0.105507401 container remove fbca74a81015e14582f4d8e740fbdc3e15d760321d36bbe36d46a54cca76053d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:50:39 compute-0 podman[257003]: 2025-11-25 09:50:39.419716313 +0000 UTC m=+0.016404735 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:50:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:39 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c005410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:39 compute-0 systemd[1]: libpod-conmon-fbca74a81015e14582f4d8e740fbdc3e15d760321d36bbe36d46a54cca76053d.scope: Deactivated successfully.
Nov 25 09:50:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:39.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:39 compute-0 podman[257038]: 2025-11-25 09:50:39.632873582 +0000 UTC m=+0.030304948 container create 7476a486a060acf3d5f088fa4a5035faba3da67261f900c58b4f0190a16290df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:50:39 compute-0 systemd[1]: Started libpod-conmon-7476a486a060acf3d5f088fa4a5035faba3da67261f900c58b4f0190a16290df.scope.
Nov 25 09:50:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65bc0549b6278c937ae3deb83c3403a8481fabb0ef739047d373e6fdfccfc65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65bc0549b6278c937ae3deb83c3403a8481fabb0ef739047d373e6fdfccfc65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65bc0549b6278c937ae3deb83c3403a8481fabb0ef739047d373e6fdfccfc65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65bc0549b6278c937ae3deb83c3403a8481fabb0ef739047d373e6fdfccfc65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:50:39 compute-0 podman[257038]: 2025-11-25 09:50:39.709222547 +0000 UTC m=+0.106653913 container init 7476a486a060acf3d5f088fa4a5035faba3da67261f900c58b4f0190a16290df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendel, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:50:39 compute-0 podman[257038]: 2025-11-25 09:50:39.713957102 +0000 UTC m=+0.111388458 container start 7476a486a060acf3d5f088fa4a5035faba3da67261f900c58b4f0190a16290df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:50:39 compute-0 podman[257038]: 2025-11-25 09:50:39.715114104 +0000 UTC m=+0.112545480 container attach 7476a486a060acf3d5f088fa4a5035faba3da67261f900c58b4f0190a16290df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:50:39 compute-0 podman[257038]: 2025-11-25 09:50:39.620230549 +0000 UTC m=+0.017661926 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:50:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:39.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:40 compute-0 peaceful_mendel[257051]: {}
Nov 25 09:50:40 compute-0 lvm[257130]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:50:40 compute-0 lvm[257130]: VG ceph_vg0 finished
Nov 25 09:50:40 compute-0 systemd[1]: libpod-7476a486a060acf3d5f088fa4a5035faba3da67261f900c58b4f0190a16290df.scope: Deactivated successfully.
Nov 25 09:50:40 compute-0 podman[257038]: 2025-11-25 09:50:40.218035523 +0000 UTC m=+0.615466899 container died 7476a486a060acf3d5f088fa4a5035faba3da67261f900c58b4f0190a16290df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendel, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f65bc0549b6278c937ae3deb83c3403a8481fabb0ef739047d373e6fdfccfc65-merged.mount: Deactivated successfully.
Nov 25 09:50:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:40] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:50:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:40] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Nov 25 09:50:40 compute-0 podman[257038]: 2025-11-25 09:50:40.241349331 +0000 UTC m=+0.638780687 container remove 7476a486a060acf3d5f088fa4a5035faba3da67261f900c58b4f0190a16290df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:50:40 compute-0 systemd[1]: libpod-conmon-7476a486a060acf3d5f088fa4a5035faba3da67261f900c58b4f0190a16290df.scope: Deactivated successfully.
Nov 25 09:50:40 compute-0 sudo[256947]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:50:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:40 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:50:40 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:40 compute-0 sudo[257140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:50:40 compute-0 sudo[257140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:40 compute-0 sudo[257140]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:40 compute-0 ceph-mon[74207]: pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:50:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:40 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:50:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:40 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75200095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:40 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:41 compute-0 sudo[257165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:50:41 compute-0 sudo[257165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:50:41 compute-0 sudo[257165]: pam_unix(sudo:session): session closed for user root
Nov 25 09:50:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:50:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:41 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:41.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:41.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:42 compute-0 ceph-mon[74207]: pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:50:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:42 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c005410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:42 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f751c005410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:42 compute-0 podman[257192]: 2025-11-25 09:50:42.984479525 +0000 UTC m=+0.042968431 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 09:50:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:50:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:43 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:43.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:43.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:44 compute-0 ceph-mon[74207]: pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:50:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:44 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:44 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752c002600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:50:44
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', '.nfs', 'volumes']
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:50:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:50:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:50:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:50:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:50:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:45 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:45.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:45.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:46 compute-0 ceph-mon[74207]: pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:50:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:46 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:46 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:47.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:47.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:47.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:47.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:50:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:47 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752c003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:47.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:47.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:48 compute-0 ceph-mon[74207]: pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 25 09:50:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:48 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:48 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:49 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:49.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:49.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:50] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:50:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:50:50] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:50:50 compute-0 ceph-mon[74207]: pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:50 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752c003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:50 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:50 compute-0 podman[257218]: 2025-11-25 09:50:50.998016374 +0000 UTC m=+0.057519884 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 09:50:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:50:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:51 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:51.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:51.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:52 compute-0 ceph-mon[74207]: pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 25 09:50:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:52 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:52 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752c003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:53 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752c003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:53.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:53 compute-0 podman[257245]: 2025-11-25 09:50:53.981435942 +0000 UTC m=+0.043257664 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:50:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:53.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:54 compute-0 ceph-mon[74207]: pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1004775845' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:50:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1004775845' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:50:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:54 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:54 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:50:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:55 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:55.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:55.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095056 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:50:56 compute-0 ceph-mon[74207]: pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:56 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:56 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:57.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:57.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:57.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:50:57.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:50:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:50:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:57 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752c0049a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:57.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.867449) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257867485, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 255, "total_data_size": 2188970, "memory_usage": 2221888, "flush_reason": "Manual Compaction"}
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257873098, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2128333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18058, "largest_seqno": 19289, "table_properties": {"data_size": 2122606, "index_size": 3054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12091, "raw_average_key_size": 18, "raw_value_size": 2110976, "raw_average_value_size": 3308, "num_data_blocks": 137, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064148, "oldest_key_time": 1764064148, "file_creation_time": 1764064257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5665 microseconds, and 4200 cpu microseconds.
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873120) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2128333 bytes OK
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873131) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873740) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873750) EVENT_LOG_v1 {"time_micros": 1764064257873747, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873760) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2183475, prev total WAL file size 2183475, number of live WAL files 2.
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.874199) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2078KB)], [38(11MB)]
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257874222, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14213692, "oldest_snapshot_seqno": -1}
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5085 keys, 13751392 bytes, temperature: kUnknown
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257901994, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13751392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13715732, "index_size": 21854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 128441, "raw_average_key_size": 25, "raw_value_size": 13621989, "raw_average_value_size": 2678, "num_data_blocks": 904, "num_entries": 5085, "num_filter_entries": 5085, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764064257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.902257) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13751392 bytes
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.906785) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 508.9 rd, 492.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.5 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(13.1) write-amplify(6.5) OK, records in: 5609, records dropped: 524 output_compression: NoCompression
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.906800) EVENT_LOG_v1 {"time_micros": 1764064257906793, "job": 18, "event": "compaction_finished", "compaction_time_micros": 27931, "compaction_time_cpu_micros": 19205, "output_level": 6, "num_output_files": 1, "total_output_size": 13751392, "num_input_records": 5609, "num_output_records": 5085, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257907395, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257909140, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.874147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.909242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.909246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.909247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.909248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:50:57.909249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:50:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:57.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:50:58 compute-0 ceph-mon[74207]: pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:50:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:58 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:58 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:50:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:50:59 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:50:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:50:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:59.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:50:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:50:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:50:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:50:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:50:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:59.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:00] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:51:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:00] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:51:00 compute-0 ceph-mon[74207]: pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:00 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752c0049a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:00 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7534003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:01 compute-0 sudo[257270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:51:01 compute-0 sudo[257270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:01 compute-0 sudo[257270]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:51:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:01 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:01.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:02 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:02 compute-0 ceph-mon[74207]: pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 25 09:51:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:02 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:51:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:03 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7534004360 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:03.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:04.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:04 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:04 compute-0 ceph-mon[74207]: pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:51:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:04 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:51:05.379 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:51:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:51:05.379 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:51:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:51:05.379 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:51:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:51:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:05 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:05.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:05 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:51:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:06.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:06 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:06 compute-0 ceph-mon[74207]: pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 09:51:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:06 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:07.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:07.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:07.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:07.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:51:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:07 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:07.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:08.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:08 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538002600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:08 compute-0 ceph-mon[74207]: pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:51:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:08 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:51:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:08 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:51:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:08 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:51:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:08 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:51:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:09 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:09.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:10.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:10] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:51:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:10] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 25 09:51:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:10 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:10 compute-0 ceph-mon[74207]: pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 25 09:51:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:10 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:51:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:11 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:51:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:11.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:51:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:11 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:51:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:12.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:12 compute-0 ceph-mon[74207]: pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:51:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:12 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:51:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:13 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:13.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:13 compute-0 podman[257311]: 2025-11-25 09:51:13.972469076 +0000 UTC m=+0.039754140 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 09:51:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:14.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:14 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:14 compute-0 ceph-mon[74207]: pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:51:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:14 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:51:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:51:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:51:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:51:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:51:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:51:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:51:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:51:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:15 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:15.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:16 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:16 compute-0 ceph-mon[74207]: pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:51:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3595711511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1922449101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4009478358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:16 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:17.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:17.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:17.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:17.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 25 09:51:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:17 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:51:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 4295 writes, 19K keys, 4295 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 4295 writes, 4295 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1486 writes, 6060 keys, 1486 commit groups, 1.0 writes per commit group, ingest: 11.01 MB, 0.02 MB/s
                                           Interval WAL: 1486 writes, 1486 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    412.8      0.08              0.05         9    0.008       0      0       0.0       0.0
                                             L6      1/0   13.11 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.1    515.5    434.7      0.23              0.15         8    0.028     36K   4329       0.0       0.0
                                            Sum      1/0   13.11 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.1    386.1    429.2      0.30              0.20        17    0.018     36K   4329       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    422.2    439.4      0.11              0.07         6    0.019     17K   2039       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    515.5    434.7      0.23              0.15         8    0.028     36K   4329       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    421.2      0.07              0.05         8    0.009       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     27.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.031, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.11 MB/s write, 0.11 GB read, 0.10 MB/s read, 0.3 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e6ae573350#2 capacity: 304.00 MB usage: 6.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(359,6.13 MB,2.01726%) FilterBlock(18,111.30 KB,0.0357527%) IndexBlock(18,218.34 KB,0.0701402%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 09:51:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1680067701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:17.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:17 compute-0 nova_compute[253512]: 2025-11-25 09:51:17.778 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:17 compute-0 nova_compute[253512]: 2025-11-25 09:51:17.778 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:17 compute-0 nova_compute[253512]: 2025-11-25 09:51:17.778 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:17 compute-0 nova_compute[253512]: 2025-11-25 09:51:17.778 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:17 compute-0 nova_compute[253512]: 2025-11-25 09:51:17.779 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:17 compute-0 nova_compute[253512]: 2025-11-25 09:51:17.779 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:51:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095118 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:51:18 compute-0 nova_compute[253512]: 2025-11-25 09:51:18.468 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:18 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:18 compute-0 ceph-mon[74207]: pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 25 09:51:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:18 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.484 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.485 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.485 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.497 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.497 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.497 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.497 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.498 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:51:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:19 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:19.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:51:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315945080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:19 compute-0 nova_compute[253512]: 2025-11-25 09:51:19.843 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:51:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:20.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.049 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.050 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4960MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.051 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.051 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.110 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.110 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.128 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:51:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:20] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:51:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:20] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 25 09:51:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:51:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2179395092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.467 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.470 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.484 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.485 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:51:20 compute-0 nova_compute[253512]: 2025-11-25 09:51:20.486 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:51:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:20 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538003140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:20 compute-0 ceph-mon[74207]: pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:51:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1315945080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2179395092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:51:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:20 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:21 compute-0 sudo[257377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:51:21 compute-0 sudo[257377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:21 compute-0 sudo[257377]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:21 compute-0 podman[257401]: 2025-11-25 09:51:21.37051986 +0000 UTC m=+0.058567890 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:51:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:51:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:21 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:21.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:22.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:22 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:22 compute-0 ceph-mon[74207]: pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 25 09:51:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:22 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538004d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:51:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:23 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538004d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:23.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:24.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:24 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:24 compute-0 ceph-mon[74207]: pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:51:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:24 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:24 compute-0 podman[257429]: 2025-11-25 09:51:24.977730604 +0000 UTC m=+0.040037914 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:51:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:51:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:25 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:25.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:26.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:26 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538004d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:26 compute-0 ceph-mon[74207]: pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:51:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:26 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:27.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:27.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:27.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:27.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
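All three ceph-dashboard webhook receivers fail the same way: the Alertmanager dispatcher cannot resolve np0005534694/5/6.shiftstack against the resolver at 192.168.122.80:53 ("no such host"), so every notify attempt is a name-resolution failure, not a connection refusal; the alerts will keep being retried and re-canceled until DNS (or the receiver URLs) is fixed. A minimal sketch reproducing the lookup check with the standard library, using the hostnames from the errors above:

```python
import socket

# Hostnames taken from the Alertmanager webhook errors above.
hosts = ["np0005534694.shiftstack",
         "np0005534695.shiftstack",
         "np0005534696.shiftstack"]
for host in hosts:
    try:
        addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
        print(f"{host}: resolves to {sorted(addrs)}")
    except socket.gaierror as exc:
        # Corresponds to the 'no such host' the dispatcher reports.
        print(f"{host}: lookup failed ({exc})")
```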
Nov 25 09:51:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:51:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:27 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:27.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
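The mon's `_set_new_cache_sizes` line repeats every few seconds as its built-in memory autotuner rebalances cache allocations. The byte counts are easier to read in MiB; a trivial conversion of the figures above:

```python
# Values copied from the _set_new_cache_sizes line above, converted to MiB.
vals = {"cache_size": 1020054731, "inc_alloc": 348127232,
        "full_alloc": 348127232, "kv_alloc": 318767104}
for name, nbytes in vals.items():
    print(f"{name}: {nbytes / 2**20:.0f} MiB")  # e.g. cache_size: 973 MiB
```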
Nov 25 09:51:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:28.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:28 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75100057c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:28 compute-0 ceph-mon[74207]: pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:51:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:28 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:29 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:29.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:51:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:30.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:30] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:51:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:30] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:51:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:30 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:30 compute-0 ceph-mon[74207]: pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:30 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:31 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:31.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
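The radosgw triples above (starting new request / req done / beast access line) repeat on a steady ~2 s cadence from 192.168.122.100 and 192.168.122.102, always anonymous `HEAD /` returning 200, which is the signature of load-balancer health probes against RGW. Each beast line carries the client, request line, status, byte count, and latency; a sketch parsing one such line (the sample is copied from the log above):

```python
import re

# Parse a radosgw 'beast' access-log line into its fields.
line = ('beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous '
        '[25/Nov/2025:09:51:31.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000010s')
m = re.search(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s',
    line,
)
print(m.groupdict())
# -> client, user, time, request, status, bytes, latency as strings
```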
Nov 25 09:51:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:32.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:32 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:32 compute-0 ceph-mon[74207]: pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:32 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:33 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:33.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:51:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:34.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:51:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:34 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:34 compute-0 ceph-mon[74207]: pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:34 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:35 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:35.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:36.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:36 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:36 compute-0 ceph-mon[74207]: pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:36 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:37.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:37.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:37.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:37.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:51:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:37 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:37.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:38.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:38 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:38 compute-0 ceph-mon[74207]: pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:51:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:38 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:39 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:39.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:40.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:40] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:51:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:40] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:51:40 compute-0 sudo[257463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:51:40 compute-0 sudo[257463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:40 compute-0 sudo[257463]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:40 compute-0 sudo[257488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:51:40 compute-0 sudo[257488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:40 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:40 compute-0 ceph-mon[74207]: pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:40 compute-0 sudo[257488]: pam_unix(sudo:session): session closed for user root
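The sudo sequence above is cephadm's periodic host inspection: the ceph-admin user escalates to root, locates python3 with `which`, then runs the copied per-cluster cephadm script with `--timeout 895 gather-facts`, which emits a JSON document of host facts back to the orchestrator. A hedged sketch re-running the same collection; it assumes a `cephadm` binary on PATH and root privileges (the copied path from the log would work equally well), and the JSON key names are left to inspection rather than assumed:

```python
import json
import subprocess

# Re-run the same host-facts collection the orchestrator triggers above.
out = subprocess.run(
    ["sudo", "cephadm", "gather-facts"],
    check=True, capture_output=True, text=True,
).stdout
facts = json.loads(out)
print(sorted(facts))  # list the available fact keys on this host
```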
Nov 25 09:51:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:40 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:51:41 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:51:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:51:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:51:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:51:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:51:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:51:41 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:51:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:51:41 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:51:41 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:51:41 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:51:41 compute-0 sudo[257542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:51:41 compute-0 sudo[257542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:41 compute-0 sudo[257542]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:41 compute-0 sudo[257567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:51:41 compute-0 sudo[257567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:41 compute-0 sudo[257604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:51:41 compute-0 sudo[257604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:41 compute-0 sudo[257604]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:41 compute-0 podman[257648]: 2025-11-25 09:51:41.441616288 +0000 UTC m=+0.028906303 container create cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_joliot, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:51:41 compute-0 systemd[1]: Started libpod-conmon-cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0.scope.
Nov 25 09:51:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:51:41 compute-0 podman[257648]: 2025-11-25 09:51:41.498661618 +0000 UTC m=+0.085951623 container init cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 09:51:41 compute-0 podman[257648]: 2025-11-25 09:51:41.503520045 +0000 UTC m=+0.090810050 container start cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_joliot, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:51:41 compute-0 podman[257648]: 2025-11-25 09:51:41.504655806 +0000 UTC m=+0.091945810 container attach cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:51:41 compute-0 agitated_joliot[257661]: 167 167
Nov 25 09:51:41 compute-0 systemd[1]: libpod-cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0.scope: Deactivated successfully.
Nov 25 09:51:41 compute-0 conmon[257661]: conmon cf624fe8e8337f5c79ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0.scope/container/memory.events
Nov 25 09:51:41 compute-0 podman[257648]: 2025-11-25 09:51:41.508500281 +0000 UTC m=+0.095790296 container died cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_joliot, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa57fb45307f388ea873b5589765bb823374555a7077ee77d5f04010b1ab6fc8-merged.mount: Deactivated successfully.
Nov 25 09:51:41 compute-0 podman[257648]: 2025-11-25 09:51:41.42981244 +0000 UTC m=+0.017102465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:51:41 compute-0 podman[257648]: 2025-11-25 09:51:41.529326164 +0000 UTC m=+0.116616170 container remove cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_joliot, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:51:41 compute-0 systemd[1]: libpod-conmon-cf624fe8e8337f5c79eaaac291b33342156b765cc00be16b1a1e3b80f78a05d0.scope: Deactivated successfully.
Nov 25 09:51:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:41 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7544004da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:41 compute-0 podman[257683]: 2025-11-25 09:51:41.656338659 +0000 UTC m=+0.032067293 container create cb0a11bc70632ddde51cba3071cb8f0dc6944eb0156ad77c1f21f1db2ac3f9cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 09:51:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:41.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:41 compute-0 systemd[1]: Started libpod-conmon-cb0a11bc70632ddde51cba3071cb8f0dc6944eb0156ad77c1f21f1db2ac3f9cf.scope.
Nov 25 09:51:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4500cdcfa344d8882efc623d79cd877e9deb8882002cda9a876c4140c5bbd27c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4500cdcfa344d8882efc623d79cd877e9deb8882002cda9a876c4140c5bbd27c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4500cdcfa344d8882efc623d79cd877e9deb8882002cda9a876c4140c5bbd27c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4500cdcfa344d8882efc623d79cd877e9deb8882002cda9a876c4140c5bbd27c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4500cdcfa344d8882efc623d79cd877e9deb8882002cda9a876c4140c5bbd27c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:41 compute-0 podman[257683]: 2025-11-25 09:51:41.715037224 +0000 UTC m=+0.090765868 container init cb0a11bc70632ddde51cba3071cb8f0dc6944eb0156ad77c1f21f1db2ac3f9cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_booth, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 09:51:41 compute-0 podman[257683]: 2025-11-25 09:51:41.719749346 +0000 UTC m=+0.095477970 container start cb0a11bc70632ddde51cba3071cb8f0dc6944eb0156ad77c1f21f1db2ac3f9cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:51:41 compute-0 podman[257683]: 2025-11-25 09:51:41.72101446 +0000 UTC m=+0.096743084 container attach cb0a11bc70632ddde51cba3071cb8f0dc6944eb0156ad77c1f21f1db2ac3f9cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_booth, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:51:41 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:51:41 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:51:41 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:41 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:41 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:51:41 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:51:41 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:51:41 compute-0 podman[257683]: 2025-11-25 09:51:41.643363463 +0000 UTC m=+0.019092107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:51:41 compute-0 laughing_booth[257697]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:51:41 compute-0 laughing_booth[257697]: --> All data devices are unavailable
Nov 25 09:51:41 compute-0 systemd[1]: libpod-cb0a11bc70632ddde51cba3071cb8f0dc6944eb0156ad77c1f21f1db2ac3f9cf.scope: Deactivated successfully.
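The laughing_booth container above is the `ceph-volume lvm batch` run launched at 09:51:41 against /dev/ceph_vg0/ceph_lv0; it reports "passed data devices: 0 physical, 1 LVM" and then "All data devices are unavailable" and exits. The `lvm list` output a few lines further down explains why: that LV is already tagged as the data device for osd.1, so batch has nothing new to create. A sketch for checking whether an LV is already consumed, by reading the same LVM tags ceph-volume consults (device path taken from the log; requires root and the `lvs` tool):

```python
import subprocess

# Check whether an LV already carries OSD ownership tags.
dev = "/dev/ceph_vg0/ceph_lv0"
tags = subprocess.run(
    ["lvs", "--noheadings", "-o", "lv_tags", dev],
    check=True, capture_output=True, text=True,
).stdout.strip()
if "ceph.osd_id=" in tags:
    print(f"{dev} is already an OSD data device:\n  {tags}")
else:
    print(f"{dev} looks unclaimed")
```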
Nov 25 09:51:42 compute-0 podman[257713]: 2025-11-25 09:51:42.01572325 +0000 UTC m=+0.018801419 container died cb0a11bc70632ddde51cba3071cb8f0dc6944eb0156ad77c1f21f1db2ac3f9cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_booth, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4500cdcfa344d8882efc623d79cd877e9deb8882002cda9a876c4140c5bbd27c-merged.mount: Deactivated successfully.
Nov 25 09:51:42 compute-0 podman[257713]: 2025-11-25 09:51:42.038242035 +0000 UTC m=+0.041320202 container remove cb0a11bc70632ddde51cba3071cb8f0dc6944eb0156ad77c1f21f1db2ac3f9cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:51:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:42.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:42 compute-0 systemd[1]: libpod-conmon-cb0a11bc70632ddde51cba3071cb8f0dc6944eb0156ad77c1f21f1db2ac3f9cf.scope: Deactivated successfully.
Nov 25 09:51:42 compute-0 sudo[257567]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:42 compute-0 sudo[257724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:51:42 compute-0 sudo[257724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:42 compute-0 sudo[257724]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:42 compute-0 sudo[257749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:51:42 compute-0 sudo[257749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:42 compute-0 podman[257806]: 2025-11-25 09:51:42.467390867 +0000 UTC m=+0.032663284 container create 5c96d20edd4faf8e2dab4413ae5c80ca6a51dd15eb0509f36354d4c5080b5301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_sammet, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 09:51:42 compute-0 systemd[1]: Started libpod-conmon-5c96d20edd4faf8e2dab4413ae5c80ca6a51dd15eb0509f36354d4c5080b5301.scope.
Nov 25 09:51:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:51:42 compute-0 podman[257806]: 2025-11-25 09:51:42.523044053 +0000 UTC m=+0.088316461 container init 5c96d20edd4faf8e2dab4413ae5c80ca6a51dd15eb0509f36354d4c5080b5301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:51:42 compute-0 podman[257806]: 2025-11-25 09:51:42.528209579 +0000 UTC m=+0.093481987 container start 5c96d20edd4faf8e2dab4413ae5c80ca6a51dd15eb0509f36354d4c5080b5301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:51:42 compute-0 podman[257806]: 2025-11-25 09:51:42.52926572 +0000 UTC m=+0.094538126 container attach 5c96d20edd4faf8e2dab4413ae5c80ca6a51dd15eb0509f36354d4c5080b5301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:51:42 compute-0 confident_sammet[257819]: 167 167
Nov 25 09:51:42 compute-0 systemd[1]: libpod-5c96d20edd4faf8e2dab4413ae5c80ca6a51dd15eb0509f36354d4c5080b5301.scope: Deactivated successfully.
Nov 25 09:51:42 compute-0 podman[257806]: 2025-11-25 09:51:42.455260974 +0000 UTC m=+0.020533401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:51:42 compute-0 podman[257824]: 2025-11-25 09:51:42.559845207 +0000 UTC m=+0.018352333 container died 5c96d20edd4faf8e2dab4413ae5c80ca6a51dd15eb0509f36354d4c5080b5301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_sammet, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-62d6bc2df7a76ba9847e451b1bb07668ca737b427c8ef59d2907774c11882212-merged.mount: Deactivated successfully.
Nov 25 09:51:42 compute-0 podman[257824]: 2025-11-25 09:51:42.574799192 +0000 UTC m=+0.033306308 container remove 5c96d20edd4faf8e2dab4413ae5c80ca6a51dd15eb0509f36354d4c5080b5301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_sammet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Nov 25 09:51:42 compute-0 systemd[1]: libpod-conmon-5c96d20edd4faf8e2dab4413ae5c80ca6a51dd15eb0509f36354d4c5080b5301.scope: Deactivated successfully.
Nov 25 09:51:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:42 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75380056d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:42 compute-0 podman[257843]: 2025-11-25 09:51:42.699787972 +0000 UTC m=+0.029720699 container create db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:51:42 compute-0 ceph-mon[74207]: pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:42 compute-0 systemd[1]: Started libpod-conmon-db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205.scope.
Nov 25 09:51:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2afc5f23244f776220b7357a4c85a26f75cd631672c1f21fcfb8f1ff18ea9aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2afc5f23244f776220b7357a4c85a26f75cd631672c1f21fcfb8f1ff18ea9aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2afc5f23244f776220b7357a4c85a26f75cd631672c1f21fcfb8f1ff18ea9aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2afc5f23244f776220b7357a4c85a26f75cd631672c1f21fcfb8f1ff18ea9aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:42 compute-0 podman[257843]: 2025-11-25 09:51:42.759592371 +0000 UTC m=+0.089525099 container init db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:51:42 compute-0 podman[257843]: 2025-11-25 09:51:42.767183279 +0000 UTC m=+0.097116006 container start db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:51:42 compute-0 podman[257843]: 2025-11-25 09:51:42.769955193 +0000 UTC m=+0.099887931 container attach db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 09:51:42 compute-0 podman[257843]: 2025-11-25 09:51:42.687322106 +0000 UTC m=+0.017254843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:51:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:42 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:42 compute-0 sharp_gould[257856]: {
Nov 25 09:51:42 compute-0 sharp_gould[257856]:     "1": [
Nov 25 09:51:42 compute-0 sharp_gould[257856]:         {
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "devices": [
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "/dev/loop3"
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             ],
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "lv_name": "ceph_lv0",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "lv_size": "21470642176",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "name": "ceph_lv0",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "tags": {
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.cluster_name": "ceph",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.crush_device_class": "",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.encrypted": "0",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.osd_id": "1",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.type": "block",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.vdo": "0",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:                 "ceph.with_tpm": "0"
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             },
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "type": "block",
Nov 25 09:51:42 compute-0 sharp_gould[257856]:             "vg_name": "ceph_vg0"
Nov 25 09:51:42 compute-0 sharp_gould[257856]:         }
Nov 25 09:51:42 compute-0 sharp_gould[257856]:     ]
Nov 25 09:51:42 compute-0 sharp_gould[257856]: }
Nov 25 09:51:43 compute-0 systemd[1]: libpod-db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205.scope: Deactivated successfully.
Nov 25 09:51:43 compute-0 conmon[257856]: conmon db885355bedc5a0ebfb6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205.scope/container/memory.events
Nov 25 09:51:43 compute-0 podman[257843]: 2025-11-25 09:51:43.009108382 +0000 UTC m=+0.339041109 container died db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2afc5f23244f776220b7357a4c85a26f75cd631672c1f21fcfb8f1ff18ea9aa-merged.mount: Deactivated successfully.
Nov 25 09:51:43 compute-0 podman[257843]: 2025-11-25 09:51:43.034527923 +0000 UTC m=+0.364460649 container remove db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gould, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:51:43 compute-0 systemd[1]: libpod-conmon-db885355bedc5a0ebfb64f31cf38e27a6523b461b5e4a6cab3c5bb1938957205.scope: Deactivated successfully.
Nov 25 09:51:43 compute-0 sudo[257749]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:43 compute-0 sudo[257874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:51:43 compute-0 sudo[257874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:43 compute-0 sudo[257874]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:43 compute-0 sudo[257899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:51:43 compute-0 sudo[257899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:43 compute-0 podman[257955]: 2025-11-25 09:51:43.523291298 +0000 UTC m=+0.030680318 container create 8cabf5a40aee6ed3fbdf432f21adc431842bc94d4c85b1ac475de35a32df4181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_leavitt, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 09:51:43 compute-0 systemd[1]: Started libpod-conmon-8cabf5a40aee6ed3fbdf432f21adc431842bc94d4c85b1ac475de35a32df4181.scope.
Nov 25 09:51:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:43 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:51:43 compute-0 podman[257955]: 2025-11-25 09:51:43.577513567 +0000 UTC m=+0.084902596 container init 8cabf5a40aee6ed3fbdf432f21adc431842bc94d4c85b1ac475de35a32df4181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:51:43 compute-0 podman[257955]: 2025-11-25 09:51:43.581970788 +0000 UTC m=+0.089359806 container start 8cabf5a40aee6ed3fbdf432f21adc431842bc94d4c85b1ac475de35a32df4181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 09:51:43 compute-0 podman[257955]: 2025-11-25 09:51:43.583146143 +0000 UTC m=+0.090535163 container attach 8cabf5a40aee6ed3fbdf432f21adc431842bc94d4c85b1ac475de35a32df4181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:51:43 compute-0 wizardly_leavitt[257969]: 167 167
Nov 25 09:51:43 compute-0 systemd[1]: libpod-8cabf5a40aee6ed3fbdf432f21adc431842bc94d4c85b1ac475de35a32df4181.scope: Deactivated successfully.
Nov 25 09:51:43 compute-0 podman[257955]: 2025-11-25 09:51:43.585742096 +0000 UTC m=+0.093131115 container died 8cabf5a40aee6ed3fbdf432f21adc431842bc94d4c85b1ac475de35a32df4181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_leavitt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 09:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e328b14e0b0be8d4e0630315ac17ffadf78bb0c26133ce3a6bdc11fb8d6462b3-merged.mount: Deactivated successfully.
Nov 25 09:51:43 compute-0 podman[257955]: 2025-11-25 09:51:43.604274667 +0000 UTC m=+0.111663685 container remove 8cabf5a40aee6ed3fbdf432f21adc431842bc94d4c85b1ac475de35a32df4181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:51:43 compute-0 podman[257955]: 2025-11-25 09:51:43.511530061 +0000 UTC m=+0.018919090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:51:43 compute-0 systemd[1]: libpod-conmon-8cabf5a40aee6ed3fbdf432f21adc431842bc94d4c85b1ac475de35a32df4181.scope: Deactivated successfully.
Nov 25 09:51:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:43.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:43 compute-0 podman[257992]: 2025-11-25 09:51:43.735845142 +0000 UTC m=+0.031155544 container create 99faee37ea64af68948ba98010b68f21d36241ae895670da42d8c0c54053ac99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jang, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:51:43 compute-0 systemd[1]: Started libpod-conmon-99faee37ea64af68948ba98010b68f21d36241ae895670da42d8c0c54053ac99.scope.
Nov 25 09:51:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdf2e35d1d30018c540a49d31e41600bdafaffc0b4f853e4965db8d567fae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdf2e35d1d30018c540a49d31e41600bdafaffc0b4f853e4965db8d567fae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdf2e35d1d30018c540a49d31e41600bdafaffc0b4f853e4965db8d567fae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdf2e35d1d30018c540a49d31e41600bdafaffc0b4f853e4965db8d567fae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:51:43 compute-0 podman[257992]: 2025-11-25 09:51:43.807638789 +0000 UTC m=+0.102949191 container init 99faee37ea64af68948ba98010b68f21d36241ae895670da42d8c0c54053ac99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jang, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:51:43 compute-0 podman[257992]: 2025-11-25 09:51:43.812829923 +0000 UTC m=+0.108140315 container start 99faee37ea64af68948ba98010b68f21d36241ae895670da42d8c0c54053ac99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jang, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:51:43 compute-0 podman[257992]: 2025-11-25 09:51:43.814002223 +0000 UTC m=+0.109312615 container attach 99faee37ea64af68948ba98010b68f21d36241ae895670da42d8c0c54053ac99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:51:43 compute-0 podman[257992]: 2025-11-25 09:51:43.72361506 +0000 UTC m=+0.018925472 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:51:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:44 compute-0 infallible_jang[258005]: {}
Nov 25 09:51:44 compute-0 lvm[258089]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:51:44 compute-0 lvm[258089]: VG ceph_vg0 finished
Nov 25 09:51:44 compute-0 systemd[1]: libpod-99faee37ea64af68948ba98010b68f21d36241ae895670da42d8c0c54053ac99.scope: Deactivated successfully.
Nov 25 09:51:44 compute-0 podman[257992]: 2025-11-25 09:51:44.331319397 +0000 UTC m=+0.626629789 container died 99faee37ea64af68948ba98010b68f21d36241ae895670da42d8c0c54053ac99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jang, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:51:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-98fdf2e35d1d30018c540a49d31e41600bdafaffc0b4f853e4965db8d567fae0-merged.mount: Deactivated successfully.
Nov 25 09:51:44 compute-0 podman[257992]: 2025-11-25 09:51:44.359030097 +0000 UTC m=+0.654340479 container remove 99faee37ea64af68948ba98010b68f21d36241ae895670da42d8c0c54053ac99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jang, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:51:44 compute-0 systemd[1]: libpod-conmon-99faee37ea64af68948ba98010b68f21d36241ae895670da42d8c0c54053ac99.scope: Deactivated successfully.
Nov 25 09:51:44 compute-0 podman[258080]: 2025-11-25 09:51:44.383914389 +0000 UTC m=+0.092609022 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 09:51:44 compute-0 sudo[257899]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:51:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:51:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:44 compute-0 sudo[258110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:51:44 compute-0 sudo[258110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:51:44 compute-0 sudo[258110]: pam_unix(sudo:session): session closed for user root
Nov 25 09:51:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:44 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75440056c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:44 compute-0 ceph-mon[74207]: pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:51:44
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'default.rgw.log', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', 'volumes']
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:51:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:51:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:44 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538005870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:51:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:51:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:45 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7520009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:45.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:51:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:46.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=404 latency=0.001000009s ======
Nov 25 09:51:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:46.106 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.001000009s
Nov 25 09:51:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - - [25/Nov/2025:09:51:46.117 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000010s
Nov 25 09:51:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:46 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:46 compute-0 ceph-mon[74207]: pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.748240) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306748286, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 681, "num_deletes": 251, "total_data_size": 937980, "memory_usage": 950392, "flush_reason": "Manual Compaction"}
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306752804, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 928493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19290, "largest_seqno": 19970, "table_properties": {"data_size": 924953, "index_size": 1384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8273, "raw_average_key_size": 19, "raw_value_size": 917791, "raw_average_value_size": 2159, "num_data_blocks": 61, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064258, "oldest_key_time": 1764064258, "file_creation_time": 1764064306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4585 microseconds, and 3424 cpu microseconds.
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.752837) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 928493 bytes OK
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.752848) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.753366) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.753377) EVENT_LOG_v1 {"time_micros": 1764064306753374, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.753389) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 934462, prev total WAL file size 934462, number of live WAL files 2.
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.753912) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(906KB)], [41(13MB)]
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306753941, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 14679885, "oldest_snapshot_seqno": -1}
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4994 keys, 12485637 bytes, temperature: kUnknown
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306782343, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 12485637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12451597, "index_size": 20426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 127202, "raw_average_key_size": 25, "raw_value_size": 12360373, "raw_average_value_size": 2475, "num_data_blocks": 840, "num_entries": 4994, "num_filter_entries": 4994, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764064306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.782477) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12485637 bytes
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.782810) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 516.1 rd, 439.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.1 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(29.3) write-amplify(13.4) OK, records in: 5510, records dropped: 516 output_compression: NoCompression
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.782824) EVENT_LOG_v1 {"time_micros": 1764064306782818, "job": 20, "event": "compaction_finished", "compaction_time_micros": 28444, "compaction_time_cpu_micros": 18752, "output_level": 6, "num_output_files": 1, "total_output_size": 12485637, "num_input_records": 5510, "num_output_records": 4994, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306783026, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306784412, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.753831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.784489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.784493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.784494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.784495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:51:46.784496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:51:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:46 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7544005840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:47.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:47.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:47.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:47.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 25 09:51:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:51:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:47 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7538005890 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:47.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:48.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:48 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752000ae20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:48 compute-0 ceph-mon[74207]: pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:51:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:48 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:49 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f754c002440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:49.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 25 09:51:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 25 09:51:49 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 25 09:51:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.002000020s ======
Nov 25 09:51:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:50.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Nov 25 09:51:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:50] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:51:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:51:50] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 25 09:51:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:50 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7544006160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:50 compute-0 ceph-mon[74207]: pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:51:50 compute-0 ceph-mon[74207]: osdmap e137: 3 total, 3 up, 3 in
Nov 25 09:51:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 25 09:51:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 25 09:51:50 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 25 09:51:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:50 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752000ae20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Nov 25 09:51:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:51 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:51.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 25 09:51:51 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 25 09:51:51 compute-0 ceph-mon[74207]: osdmap e138: 3 total, 3 up, 3 in
Nov 25 09:51:51 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 25 09:51:52 compute-0 podman[258144]: 2025-11-25 09:51:52.000475093 +0000 UTC m=+0.057492223 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 09:51:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:52 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 25 09:51:52 compute-0 ceph-mon[74207]: pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Nov 25 09:51:52 compute-0 ceph-mon[74207]: osdmap e139: 3 total, 3 up, 3 in
Nov 25 09:51:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 25 09:51:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 25 09:51:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 25 09:51:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 25 09:51:52 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 25 09:51:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:52 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 3.0 KiB/s wr, 26 op/s
Nov 25 09:51:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:53 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752000ae20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:53.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:53 compute-0 ceph-mon[74207]: osdmap e140: 3 total, 3 up, 3 in
Nov 25 09:51:53 compute-0 ceph-mon[74207]: osdmap e141: 3 total, 3 up, 3 in
Nov 25 09:51:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:54.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095154 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:51:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:54 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752000ae20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:54 compute-0 ceph-mon[74207]: pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 3.0 KiB/s wr, 26 op/s
Nov 25 09:51:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3337901668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:51:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3337901668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:51:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:54 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7514004ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:51:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.4 KiB/s wr, 20 op/s
Nov 25 09:51:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:55 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7544006ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:55.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:55 compute-0 podman[258172]: 2025-11-25 09:51:55.97266867 +0000 UTC m=+0.038372846 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd)
Nov 25 09:51:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:56.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:56 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752000ae20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:56 compute-0 ceph-mon[74207]: pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.4 KiB/s wr, 20 op/s
Nov 25 09:51:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:56 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752000ae20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:57.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:57.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:57.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:51:57.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:51:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v619: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 6.8 MiB/s wr, 47 op/s
Nov 25 09:51:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:57 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752000ae20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:57.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:51:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:51:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:58.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:51:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:58 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7550000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:58 compute-0 ceph-mon[74207]: pgmap v619: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 6.8 MiB/s wr, 47 op/s
Nov 25 09:51:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:58 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752000ae20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:51:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v620: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.4 MiB/s wr, 37 op/s
Nov 25 09:51:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[256320]: 25/11/2025 09:51:59 : epoch 69257bcb : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f752000ae20 fd 39 proxy ignored for local
Nov 25 09:51:59 compute-0 kernel: ganesha.nfsd[258190]: segfault at 50 ip 00007f75ce3fa32e sp 00007f7592ffc210 error 4 in libntirpc.so.5.8[7f75ce3df000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 09:51:59 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:51:59 compute-0 systemd[1]: Started Process Core Dump (PID 258193/UID 0).
Nov 25 09:51:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:51:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:51:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:59.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:51:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:51:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:00.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:00] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Nov 25 09:52:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:00] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Nov 25 09:52:00 compute-0 systemd-coredump[258194]: Process 256324 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 62:
                                                    #0  0x00007f75ce3fa32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:52:00 compute-0 systemd[1]: systemd-coredump@9-258193-0.service: Deactivated successfully.
Nov 25 09:52:00 compute-0 systemd[1]: systemd-coredump@9-258193-0.service: Consumed 1.007s CPU time.
Nov 25 09:52:00 compute-0 podman[258201]: 2025-11-25 09:52:00.687128208 +0000 UTC m=+0.020986588 container died 944dada5a2a8753b215fe568c16e778485e03736646e5e07b9a04a882e698a61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-30a2f059bdad4f1d59fe2188cfb8b0a2d1330df29e61c1bb0b3adf8f6b1eac3d-merged.mount: Deactivated successfully.
Nov 25 09:52:00 compute-0 podman[258201]: 2025-11-25 09:52:00.707570867 +0000 UTC m=+0.041429237 container remove 944dada5a2a8753b215fe568c16e778485e03736646e5e07b9a04a882e698a61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:52:00 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:52:00 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:52:00 compute-0 ceph-mon[74207]: pgmap v620: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.4 MiB/s wr, 37 op/s
Nov 25 09:52:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:00 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.055s CPU time.
Nov 25 09:52:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v621: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.7 MiB/s wr, 33 op/s
Nov 25 09:52:01 compute-0 sudo[258235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:52:01 compute-0 sudo[258235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:01 compute-0 sudo[258235]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.002000020s ======
Nov 25 09:52:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:01.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Nov 25 09:52:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:02 compute-0 ceph-mon[74207]: pgmap v621: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.7 MiB/s wr, 33 op/s
Nov 25 09:52:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v622: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.9 MiB/s wr, 27 op/s
Nov 25 09:52:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:04 compute-0 ceph-mon[74207]: pgmap v622: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.9 MiB/s wr, 27 op/s
Nov 25 09:52:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:05.380 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:05.381 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:05.381 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v623: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.4 MiB/s wr, 24 op/s
Nov 25 09:52:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095205 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:52:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:05.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:06.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:06 compute-0 ceph-mon[74207]: pgmap v623: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.4 MiB/s wr, 24 op/s
Nov 25 09:52:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:07.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:07.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:07.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:07.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v624: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.4 MiB/s wr, 25 op/s
Nov 25 09:52:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:07 compute-0 ceph-mon[74207]: pgmap v624: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.4 MiB/s wr, 25 op/s
Nov 25 09:52:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:08.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:08.498 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:52:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:08.499 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:52:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v625: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:52:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:09.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:10.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:10] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Nov 25 09:52:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:10] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Nov 25 09:52:10 compute-0 ceph-mon[74207]: pgmap v625: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 25 09:52:10 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 10.
Nov 25 09:52:10 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:52:10 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.055s CPU time.
Nov 25 09:52:10 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:52:11 compute-0 podman[258309]: 2025-11-25 09:52:11.079803787 +0000 UTC m=+0.029489964 container create 791bbab5845f78c874800a3bbdef657807d13c08bd378e252e2c6bda4d70e108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9298a2fcc9e2df78612b3903844701153db4ec3da0115abb4947af9b7b847515/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9298a2fcc9e2df78612b3903844701153db4ec3da0115abb4947af9b7b847515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9298a2fcc9e2df78612b3903844701153db4ec3da0115abb4947af9b7b847515/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9298a2fcc9e2df78612b3903844701153db4ec3da0115abb4947af9b7b847515/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:11 compute-0 podman[258309]: 2025-11-25 09:52:11.132330731 +0000 UTC m=+0.082016908 container init 791bbab5845f78c874800a3bbdef657807d13c08bd378e252e2c6bda4d70e108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:52:11 compute-0 podman[258309]: 2025-11-25 09:52:11.135947498 +0000 UTC m=+0.085633674 container start 791bbab5845f78c874800a3bbdef657807d13c08bd378e252e2c6bda4d70e108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:52:11 compute-0 bash[258309]: 791bbab5845f78c874800a3bbdef657807d13c08bd378e252e2c6bda4d70e108
Nov 25 09:52:11 compute-0 podman[258309]: 2025-11-25 09:52:11.067375943 +0000 UTC m=+0.017062140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:52:11 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:52:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:52:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:52:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:52:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:52:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:52:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:52:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:52:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v626: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:52:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:11.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:12.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:12 compute-0 ceph-mon[74207]: pgmap v626: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:52:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v627: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 09:52:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:13.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:14.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:14 compute-0 rsyslogd[961]: imjournal from <np0005534694:radosgw>: begin to drop messages due to rate-limiting
Nov 25 09:52:14 compute-0 ceph-mon[74207]: pgmap v627: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 09:52:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:52:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:52:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:52:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:52:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:52:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:52:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:52:14 compute-0 podman[258367]: 2025-11-25 09:52:14.98860524 +0000 UTC m=+0.045913579 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 09:52:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v628: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 09:52:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:15.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:16.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:16 compute-0 ceph-mon[74207]: pgmap v628: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 09:52:16 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:16.500 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:17.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:17.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:17.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:17.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 09:52:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v629: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:52:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2106931284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:17.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:18.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:18 compute-0 nova_compute[253512]: 2025-11-25 09:52:18.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-0 nova_compute[253512]: 2025-11-25 09:52:18.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-0 nova_compute[253512]: 2025-11-25 09:52:18.501 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-0 nova_compute[253512]: 2025-11-25 09:52:18.501 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-0 nova_compute[253512]: 2025-11-25 09:52:18.501 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:18 compute-0 ceph-mon[74207]: pgmap v629: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 25 09:52:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1419400791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2717398405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3747932840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095219 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:52:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [ALERT] 328/095219 (4) : backend 'backend' has no server available!
Nov 25 09:52:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v630: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Nov 25 09:52:19 compute-0 nova_compute[253512]: 2025-11-25 09:52:19.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:19 compute-0 nova_compute[253512]: 2025-11-25 09:52:19.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:19 compute-0 nova_compute[253512]: 2025-11-25 09:52:19.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:19 compute-0 nova_compute[253512]: 2025-11-25 09:52:19.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:52:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:19.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:20.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:20] "GET /metrics HTTP/1.1" 200 48389 "" "Prometheus/2.51.0"
Nov 25 09:52:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:20] "GET /metrics HTTP/1.1" 200 48389 "" "Prometheus/2.51.0"
Nov 25 09:52:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095220 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.487 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.487 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.501 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.501 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.501 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.501 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.501 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:20 compute-0 ceph-mon[74207]: pgmap v630: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Nov 25 09:52:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:52:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4107048521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:20 compute-0 nova_compute[253512]: 2025-11-25 09:52:20.826 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.021 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.022 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4957MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.022 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.022 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.072 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.072 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.087 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:52:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3764020354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v631: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.424 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.427 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.439 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.441 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:52:21 compute-0 nova_compute[253512]: 2025-11-25 09:52:21.441 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:21 compute-0 sudo[258433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:52:21 compute-0 sudo[258433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:21 compute-0 sudo[258433]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4107048521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3764020354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:21.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:22 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:22 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:22 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:22 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 09:52:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:22.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:22 compute-0 ceph-mon[74207]: pgmap v631: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:52:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:22 compute-0 podman[258460]: 2025-11-25 09:52:22.992571009 +0000 UTC m=+0.058372451 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 09:52:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v632: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:52:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:23.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:24.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:24 compute-0 ceph-mon[74207]: pgmap v632: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:52:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v633: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:52:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:25 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:25 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:25 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:52:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:25.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:52:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:26.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:26 compute-0 ceph-mon[74207]: pgmap v633: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Nov 25 09:52:26 compute-0 nova_compute[253512]: 2025-11-25 09:52:26.811 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:26 compute-0 nova_compute[253512]: 2025-11-25 09:52:26.811 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:26 compute-0 nova_compute[253512]: 2025-11-25 09:52:26.827 253516 DEBUG nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:52:26 compute-0 nova_compute[253512]: 2025-11-25 09:52:26.910 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:26 compute-0 nova_compute[253512]: 2025-11-25 09:52:26.911 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:26 compute-0 nova_compute[253512]: 2025-11-25 09:52:26.915 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:52:26 compute-0 nova_compute[253512]: 2025-11-25 09:52:26.915 253516 INFO nova.compute.claims [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:52:26 compute-0 podman[258487]: 2025-11-25 09:52:26.978554405 +0000 UTC m=+0.042289258 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 09:52:26 compute-0 nova_compute[253512]: 2025-11-25 09:52:26.999 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:27.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:27.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:27.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:27.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:52:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604582290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.333 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.336 253516 DEBUG nova.compute.provider_tree [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.348 253516 DEBUG nova.scheduler.client.report [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.362 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.363 253516 DEBUG nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.392 253516 DEBUG nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.393 253516 DEBUG nova.network.neutron [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.408 253516 INFO nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.421 253516 DEBUG nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:52:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v634: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.476 253516 DEBUG nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.477 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.477 253516 INFO nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Creating image(s)
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.496 253516 DEBUG nova.storage.rbd_utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.513 253516 DEBUG nova.storage.rbd_utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.528 253516 DEBUG nova.storage.rbd_utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.529 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.530 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3604582290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:52:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:27.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:27 compute-0 nova_compute[253512]: 2025-11-25 09:52:27.889 253516 DEBUG nova.virt.libvirt.imagebackend [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image locations are: [{'url': 'rbd://af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/images/62ddd1b7-1bba-493e-a10f-b03a12ab3457/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/images/62ddd1b7-1bba-493e-a10f-b03a12ab3457/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 09:52:28 compute-0 nova_compute[253512]: 2025-11-25 09:52:28.069 253516 WARNING oslo_policy.policy [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 25 09:52:28 compute-0 nova_compute[253512]: 2025-11-25 09:52:28.070 253516 WARNING oslo_policy.policy [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 25 09:52:28 compute-0 nova_compute[253512]: 2025-11-25 09:52:28.072 253516 DEBUG nova.policy [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:52:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:28.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 09:52:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 8934 writes, 34K keys, 8934 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8934 writes, 2144 syncs, 4.17 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 899 writes, 1569 keys, 899 commit groups, 1.0 writes per commit group, ingest: 0.68 MB, 0.00 MB/s
                                           Interval WAL: 899 writes, 431 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 09:52:28 compute-0 ceph-mon[74207]: pgmap v634: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.018 253516 DEBUG nova.network.neutron [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Successfully created port: 28155732-58f2-49db-83c4-a44433c25b29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:52:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v635: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 511 B/s wr, 2 op/s
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.509 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.557 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.558 253516 DEBUG nova.virt.images [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] 62ddd1b7-1bba-493e-a10f-b03a12ab3457 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.559 253516 DEBUG nova.privsep.utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.559 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.617 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.620 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.666 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.667 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.686 253516 DEBUG nova.storage.rbd_utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:52:29 compute-0 nova_compute[253512]: 2025-11-25 09:52:29.689 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:29.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:52:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:30.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:30] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 25 09:52:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:30] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 25 09:52:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 25 09:52:30 compute-0 ceph-mon[74207]: pgmap v635: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 511 B/s wr, 2 op/s
Nov 25 09:52:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 25 09:52:30 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.147 253516 DEBUG nova.network.neutron [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Successfully updated port: 28155732-58f2-49db-83c4-a44433c25b29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.166 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.166 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.166 253516 DEBUG nova.network.neutron [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.338 253516 DEBUG nova.network.neutron [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:52:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v637: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 818 B/s wr, 10 op/s
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001a:nfs.cephfs.2: -2
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:52:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 25 09:52:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 25 09:52:31 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 25 09:52:31 compute-0 ceph-mon[74207]: osdmap e142: 3 total, 3 up, 3 in
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:52:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.591 253516 DEBUG nova.compute.manager [req-70840676-6299-4125-8a94-d4ee31358289 req-ae626d3f-f4f7-4891-80de-72b19014a219 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received event network-changed-28155732-58f2-49db-83c4-a44433c25b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.591 253516 DEBUG nova.compute.manager [req-70840676-6299-4125-8a94-d4ee31358289 req-ae626d3f-f4f7-4891-80de-72b19014a219 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Refreshing instance network info cache due to event network-changed-28155732-58f2-49db-83c4-a44433c25b29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.592 253516 DEBUG oslo_concurrency.lockutils [req-70840676-6299-4125-8a94-d4ee31358289 req-ae626d3f-f4f7-4891-80de-72b19014a219 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.678 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.990s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.727 253516 DEBUG nova.storage.rbd_utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 09:52:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:31.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.781 253516 DEBUG nova.objects.instance [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid 20ce4aa3-c077-4515-86c2-9c414a3cdd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.791 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.792 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Ensure instance console log exists: /var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.792 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.792 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:31 compute-0 nova_compute[253512]: 2025-11-25 09:52:31.792 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:32.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.403 253516 DEBUG nova.network.neutron [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updating instance_info_cache with network_info: [{"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.417 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.417 253516 DEBUG nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Instance network_info: |[{"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
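
The network_info blob logged above recurs several times in this sequence; for debugging, the fields that matter are the device name, MAC, fixed IP, and MTU. A short sketch pulling those out of the same JSON (payload abbreviated to just the keys used):

    import json

    network_info = json.loads('''[{"id": "28155732-58f2-49db-83c4-a44433c25b29",
        "address": "fa:16:3e:99:fd:4d", "devname": "tap28155732-58",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.4"}]}],
                    "meta": {"mtu": 1442}}}]''')
    for vif in network_info:
        print(vif['devname'], vif['address'],
              vif['network']['subnets'][0]['ips'][0]['address'],
              vif['network']['meta']['mtu'])
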
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.418 253516 DEBUG oslo_concurrency.lockutils [req-70840676-6299-4125-8a94-d4ee31358289 req-ae626d3f-f4f7-4891-80de-72b19014a219 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.418 253516 DEBUG nova.network.neutron [req-70840676-6299-4125-8a94-d4ee31358289 req-ae626d3f-f4f7-4891-80de-72b19014a219 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Refreshing network info cache for port 28155732-58f2-49db-83c4-a44433c25b29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.420 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Start _get_guest_xml network_info=[{"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.424 253516 WARNING nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.428 253516 DEBUG nova.virt.libvirt.host [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.428 253516 DEBUG nova.virt.libvirt.host [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.430 253516 DEBUG nova.virt.libvirt.host [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.431 253516 DEBUG nova.virt.libvirt.host [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.431 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.431 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.432 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.432 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.432 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.432 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.432 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.432 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.432 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.433 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.433 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.433 253516 DEBUG nova.virt.hardware [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
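
What the hardware.py lines above are doing: with no flavor or image topology constraints (all limits and preferences 0:0:0), nova enumerates every sockets*cores*threads factorisation of the vCPU count under the 65536 caps, which for a single vCPU yields exactly one topology, 1:1:1. A conceptual sketch of that search (this mirrors the idea, not nova's exact code):

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Enumerate every factorisation sockets*cores*threads == vcpus
        # within the given limits, as in _get_possible_cpu_topologies.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]
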
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.436 253516 DEBUG nova.privsep.utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.436 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:32 compute-0 ceph-mon[74207]: pgmap v637: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 818 B/s wr, 10 op/s
Nov 25 09:52:32 compute-0 ceph-mon[74207]: osdmap e143: 3 total, 3 up, 3 in
Nov 25 09:52:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:32 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:52:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340971977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.772 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
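
Nova shells out to `ceph mon dump` here to learn the monitor addresses it later writes into the guest disk XML (the three 192.168.122.x hosts visible further down). The same query can be run by hand with the fields rbd cares about extracted; arguments are copied from the logged command, and the address field name varies by Ceph release:

    import json, subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon.get('addr'))  # or 'public_addr' on newer Ceph
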
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.792 253516 DEBUG nova.storage.rbd_utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:52:32 compute-0 nova_compute[253512]: 2025-11-25 09:52:32.794 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:33 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:52:33 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3791333604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.134 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.136 253516 DEBUG nova.virt.libvirt.vif [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-453829189',display_name='tempest-TestNetworkBasicOps-server-453829189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-453829189',id=1,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEfsH4ZhDHRBQ0SXsq2ksZ7miMJturUxNye3bMuHr58eUD8ojFQTCwUl3zvYWeVggLe5N5I44aJcrzxLhc25YSfwBBYUQcIRfap8L0aaDZjPbskXYYx1DYkjthNp2iz2sw==',key_name='tempest-TestNetworkBasicOps-1065734731',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-v94dznfx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:52:27Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=20ce4aa3-c077-4515-86c2-9c414a3cdd3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.136 253516 DEBUG nova.network.os_vif_util [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.137 253516 DEBUG nova.network.os_vif_util [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:fd:4d,bridge_name='br-int',has_traffic_filtering=True,id=28155732-58f2-49db-83c4-a44433c25b29,network=Network(c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28155732-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.138 253516 DEBUG nova.objects.instance [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid 20ce4aa3-c077-4515-86c2-9c414a3cdd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.150 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <uuid>20ce4aa3-c077-4515-86c2-9c414a3cdd3e</uuid>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <name>instance-00000001</name>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <memory>131072</memory>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <vcpu>1</vcpu>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <metadata>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <nova:name>tempest-TestNetworkBasicOps-server-453829189</nova:name>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <nova:creationTime>2025-11-25 09:52:32</nova:creationTime>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <nova:flavor name="m1.nano">
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <nova:memory>128</nova:memory>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <nova:disk>1</nova:disk>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <nova:swap>0</nova:swap>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       </nova:flavor>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <nova:owner>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       </nova:owner>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <nova:ports>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <nova:port uuid="28155732-58f2-49db-83c4-a44433c25b29">
Nov 25 09:52:33 compute-0 nova_compute[253512]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         </nova:port>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       </nova:ports>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     </nova:instance>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   </metadata>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <sysinfo type="smbios">
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <system>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <entry name="serial">20ce4aa3-c077-4515-86c2-9c414a3cdd3e</entry>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <entry name="uuid">20ce4aa3-c077-4515-86c2-9c414a3cdd3e</entry>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     </system>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   </sysinfo>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <os>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <boot dev="hd"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <smbios mode="sysinfo"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   </os>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <features>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <acpi/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <apic/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <vmcoreinfo/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   </features>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <clock offset="utc">
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <timer name="hpet" present="no"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   </clock>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <cpu mode="host-model" match="exact">
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <disk type="network" device="disk">
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk">
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       </source>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <target dev="vda" bus="virtio"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <disk type="network" device="cdrom">
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk.config">
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       </source>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:52:33 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <target dev="sda" bus="sata"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <interface type="ethernet">
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <mac address="fa:16:3e:99:fd:4d"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <mtu size="1442"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <target dev="tap28155732-58"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <serial type="pty">
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <log file="/var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e/console.log" append="off"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     </serial>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <video>
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     </video>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <input type="tablet" bus="usb"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <rng model="virtio">
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <controller type="usb" index="0"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     <memballoon model="virtio">
Nov 25 09:52:33 compute-0 nova_compute[253512]:       <stats period="10"/>
Nov 25 09:52:33 compute-0 nova_compute[253512]:     </memballoon>
Nov 25 09:52:33 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:52:33 compute-0 nova_compute[253512]: </domain>
Nov 25 09:52:33 compute-0 nova_compute[253512]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
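
The domain XML dumped above is the most useful artifact in this boot sequence: it shows the rbd-backed root disk and config-drive CD-ROM, the three Ceph monitor endpoints, the tap interface with its 1442-byte MTU, and the q35/host-model machine setup. To sanity-check a dump like this offline, save it to a file and walk it with ElementTree (the path below is hypothetical):

    import xml.etree.ElementTree as ET

    dom = ET.parse('/tmp/instance-00000001.xml')  # XML saved from the dump above
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            hosts = [h.get('name') for h in src.findall('host')]
            # Prints the pool/image name and its monitor endpoints for
            # both the root disk and the disk.config cdrom.
            print(disk.get('device'), src.get('name'), '->', ', '.join(hosts))
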
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.150 253516 DEBUG nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Preparing to wait for external event network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.150 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.150 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.151 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.151 253516 DEBUG nova.virt.libvirt.vif [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-453829189',display_name='tempest-TestNetworkBasicOps-server-453829189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-453829189',id=1,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEfsH4ZhDHRBQ0SXsq2ksZ7miMJturUxNye3bMuHr58eUD8ojFQTCwUl3zvYWeVggLe5N5I44aJcrzxLhc25YSfwBBYUQcIRfap8L0aaDZjPbskXYYx1DYkjthNp2iz2sw==',key_name='tempest-TestNetworkBasicOps-1065734731',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-v94dznfx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:52:27Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=20ce4aa3-c077-4515-86c2-9c414a3cdd3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.151 253516 DEBUG nova.network.os_vif_util [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.152 253516 DEBUG nova.network.os_vif_util [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:fd:4d,bridge_name='br-int',has_traffic_filtering=True,id=28155732-58f2-49db-83c4-a44433c25b29,network=Network(c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28155732-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.152 253516 DEBUG os_vif [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:fd:4d,bridge_name='br-int',has_traffic_filtering=True,id=28155732-58f2-49db-83c4-a44433c25b29,network=Network(c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28155732-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.178 253516 DEBUG ovsdbapp.backend.ovs_idl [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.178 253516 DEBUG ovsdbapp.backend.ovs_idl [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.179 253516 DEBUG ovsdbapp.backend.ovs_idl [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.179 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.179 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.179 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.180 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.181 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.182 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.190 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.190 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.190 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.191 253516 INFO oslo.privsep.daemon [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmps4yt5cgx/privsep.sock']
Nov 25 09:52:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v639: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1023 B/s wr, 13 op/s
Nov 25 09:52:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095233 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:52:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:33 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x5632768248a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:33 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1340971977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:52:33 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3791333604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.742 253516 INFO oslo.privsep.daemon [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Spawned new privsep daemon via rootwrap
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.661 258791 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.664 258791 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.665 258791 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.665 258791 INFO oslo.privsep.daemon [-] privsep daemon running as pid 258791
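
The privsep startup above (sudo + nova-rootwrap spawning a helper that then reports uid/gid 0/0 with only CAP_DAC_OVERRIDE|CAP_NET_ADMIN retained) is driven by a declared privsep context; the plug path needs root for netdev operations but drops everything else. A sketch of how such a context is declared, following oslo.privsep's documented pattern; the entrypoint below is illustrative, not vif_plug_ovs's actual code:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    vif_plug = priv_context.PrivContext(
        'vif_plug_ovs',
        cfg_section='vif_plug_ovs_privileged',
        pypath=__name__ + '.vif_plug',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_DAC_OVERRIDE],
    )

    @vif_plug.entrypoint
    def plug_device(devname):
        # Runs inside the root helper process logged above (pid 258791),
        # not in the nova-compute process itself.
        pass
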
Nov 25 09:52:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:33.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.985 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.985 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28155732-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.986 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28155732-58, col_values=(('external_ids', {'iface-id': '28155732-58f2-49db-83c4-a44433c25b29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:fd:4d', 'vm-uuid': '20ce4aa3-c077-4515-86c2-9c414a3cdd3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:52:33 compute-0 NetworkManager[48903]: <info>  [1764064353.9880] manager: (tap28155732-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.987 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.990 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:52:33 compute-0 nova_compute[253512]: 2025-11-25 09:52:33.993 253516 INFO os_vif [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:fd:4d,bridge_name='br-int',has_traffic_filtering=True,id=28155732-58f2-49db-83c4-a44433c25b29,network=Network(c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28155732-58')
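
The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are what "Successfully plugged vif" summarises. Their rough CLI equivalent, with the port name and external_ids copied from the logged transaction, would be:

    import subprocess

    subprocess.check_call([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap28155732-58',
        '--', 'set', 'Interface', 'tap28155732-58',
        'external_ids:iface-id=28155732-58f2-49db-83c4-a44433c25b29',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:99:fd:4d',
        'external_ids:vm-uuid=20ce4aa3-c077-4515-86c2-9c414a3cdd3e'])

The iface-id is what OVN matches against the Neutron port UUID to bind the logical port; NetworkManager merely observes the new OVS port appearing (the <info> line above).
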
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.028 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.028 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.028 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:99:fd:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.028 253516 INFO nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Using config drive
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.047 253516 DEBUG nova.storage.rbd_utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:52:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:34.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.463 253516 INFO nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Creating config drive at /var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e/disk.config
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.467 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfw091uk0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.507 253516 DEBUG nova.network.neutron [req-70840676-6299-4125-8a94-d4ee31358289 req-ae626d3f-f4f7-4891-80de-72b19014a219 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updated VIF entry in instance network info cache for port 28155732-58f2-49db-83c4-a44433c25b29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.508 253516 DEBUG nova.network.neutron [req-70840676-6299-4125-8a94-d4ee31358289 req-ae626d3f-f4f7-4891-80de-72b19014a219 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updating instance_info_cache with network_info: [{"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.523 253516 DEBUG oslo_concurrency.lockutils [req-70840676-6299-4125-8a94-d4ee31358289 req-ae626d3f-f4f7-4891-80de-72b19014a219 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.589 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfw091uk0" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
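The config drive is just an ISO9660/Joliet image built by shelling out to mkisofs, as the two entries above show. A rough reproduction of the same invocation (output path and metadata directory are placeholders, not the logged temp dir):

    # Sketch: build a "config-2" ISO the way the logged mkisofs command does.
    # out.iso and metadata_dir are placeholders; mkisofs must be installed.
    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs", "-o", "out.iso",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute",  # Nova appends its version string
        "-quiet", "-J", "-r", "-V", "config-2",
        "metadata_dir",
    ], check=True)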
Nov 25 09:52:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:34 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:52:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:34 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:52:34 compute-0 ceph-mon[74207]: pgmap v639: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1023 B/s wr, 13 op/s
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.611 253516 DEBUG nova.storage.rbd_utils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.613 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e/disk.config 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.626 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:34 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff40089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.695 253516 DEBUG oslo_concurrency.processutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e/disk.config 20ce4aa3-c077-4515-86c2-9c414a3cdd3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.695 253516 INFO nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Deleting local config drive /var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e/disk.config because it was imported into RBD.
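Because the instance's disks live in Ceph, the freshly built ISO is pushed into the "vms" pool with rbd import and the local copy removed, exactly the two steps logged above. A condensed sketch (UUID and Ceph client settings copied from the log; treat them as environment-specific):

    # Sketch: import a local config drive into RBD, then delete the local
    # file once the import succeeds, mirroring the logged sequence.
    import os
    import subprocess

    uuid = "20ce4aa3-c077-4515-86c2-9c414a3cdd3e"
    local = f"/var/lib/nova/instances/{uuid}/disk.config"
    subprocess.run(["rbd", "import", "--pool", "vms", local,
                    f"{uuid}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)
    os.unlink(local)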
Nov 25 09:52:34 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 09:52:34 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 09:52:34 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 25 09:52:34 compute-0 kernel: tap28155732-58: entered promiscuous mode
Nov 25 09:52:34 compute-0 NetworkManager[48903]: <info>  [1764064354.7663] manager: (tap28155732-58): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 25 09:52:34 compute-0 ovn_controller[155020]: 2025-11-25T09:52:34Z|00027|binding|INFO|Claiming lport 28155732-58f2-49db-83c4-a44433c25b29 for this chassis.
Nov 25 09:52:34 compute-0 ovn_controller[155020]: 2025-11-25T09:52:34Z|00028|binding|INFO|28155732-58f2-49db-83c4-a44433c25b29: Claiming fa:16:3e:99:fd:4d 10.100.0.4
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.767 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:34 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:34.778 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:fd:4d 10.100.0.4'], port_security=['fa:16:3e:99:fd:4d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20ce4aa3-c077-4515-86c2-9c414a3cdd3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '900241cb-44e2-4a1c-b163-ffb52d77319a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fa0b426-4521-4667-8d25-bb8f0339de8c, chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=28155732-58f2-49db-83c4-a44433c25b29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:52:34 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:34.779 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 28155732-58f2-49db-83c4-a44433c25b29 in datapath c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0 bound to our chassis
Nov 25 09:52:34 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:34.780 164791 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0
Nov 25 09:52:34 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:34.781 164791 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpgnyjxqxd/privsep.sock']
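The privsep helper launched here follows the standard oslo.privsep pattern: the agent forks a root daemon via rootwrap and invokes privileged functions over the unix socket named in the command line. A minimal sketch of how such an entrypoint is declared (the context name, capability set and function below are illustrative, not neutron's actual definitions):

    # Sketch: declaring a privsep context and entrypoint with oslo.privsep.
    # Context/capabilities are illustrative; neutron defines its own.
    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(
        "demo",
        cfg_section="privsep",
        pypath=__name__ + ".ctx",
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @ctx.entrypoint
    def set_link_up(ifname):
        # Runs inside the root privsep daemon; unprivileged callers invoke
        # it transparently over the privsep socket.
        import subprocess
        subprocess.run(["ip", "link", "set", ifname, "up"], check=True)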
Nov 25 09:52:34 compute-0 systemd-udevd[258891]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:52:34 compute-0 NetworkManager[48903]: <info>  [1764064354.8116] device (tap28155732-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:52:34 compute-0 NetworkManager[48903]: <info>  [1764064354.8124] device (tap28155732-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:52:34 compute-0 systemd-machined[216497]: New machine qemu-1-instance-00000001.
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.854 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:34 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.859 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:34 compute-0 ovn_controller[155020]: 2025-11-25T09:52:34Z|00029|binding|INFO|Setting lport 28155732-58f2-49db-83c4-a44433c25b29 ovn-installed in OVS
Nov 25 09:52:34 compute-0 ovn_controller[155020]: 2025-11-25T09:52:34Z|00030|binding|INFO|Setting lport 28155732-58f2-49db-83c4-a44433c25b29 up in Southbound
Nov 25 09:52:34 compute-0 nova_compute[253512]: 2025-11-25 09:52:34.864 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:35 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8002700 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.166 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064355.1661136, 20ce4aa3-c077-4515-86c2-9c414a3cdd3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.167 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] VM Started (Lifecycle Event)
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.194 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.197 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064355.1686883, 20ce4aa3-c077-4515-86c2-9c414a3cdd3e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.197 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] VM Paused (Lifecycle Event)
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.208 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.210 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.220 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] During sync_power_state the instance has a pending task (spawning). Skip.
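The sync_power_state entries above encode a simple rule: while a task such as spawning is still pending, lifecycle-driven power-state syncs are skipped rather than racing the in-flight operation. A condensed sketch of that decision (constants inlined; not Nova's actual code):

    # Sketch of the logged decision. Values mirror the log: DB power_state 0
    # (NOSTATE), VM power_state 3 (PAUSED), task_state "spawning".
    def handle_lifecycle_event(vm_state, task_state, db_power_state, vm_power_state):
        if task_state is not None:            # e.g. "spawning"
            return "skip"                     # "instance has a pending task"
        if db_power_state != vm_power_state:
            return "sync"                     # reconcile DB with hypervisor
        return "noop"

    print(handle_lifecycle_event("building", "spawning", 0, 3))  # -> skip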
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.289 253516 DEBUG nova.compute.manager [req-3004bebf-da7f-4233-b5e0-38cae8ce6733 req-8f615daa-1a62-431f-bf76-a5071ad72e1e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received event network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.290 253516 DEBUG oslo_concurrency.lockutils [req-3004bebf-da7f-4233-b5e0-38cae8ce6733 req-8f615daa-1a62-431f-bf76-a5071ad72e1e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.290 253516 DEBUG oslo_concurrency.lockutils [req-3004bebf-da7f-4233-b5e0-38cae8ce6733 req-8f615daa-1a62-431f-bf76-a5071ad72e1e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.290 253516 DEBUG oslo_concurrency.lockutils [req-3004bebf-da7f-4233-b5e0-38cae8ce6733 req-8f615daa-1a62-431f-bf76-a5071ad72e1e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.290 253516 DEBUG nova.compute.manager [req-3004bebf-da7f-4233-b5e0-38cae8ce6733 req-8f615daa-1a62-431f-bf76-a5071ad72e1e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Processing event network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.291 253516 DEBUG nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
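The vif-plug handshake logged above is a waiter/poster pattern: the spawning thread registers interest in the network-vif-plugged event for its port and blocks, and the external event delivered by Neutron pops the event under the per-instance lock to wake it. A minimal sketch with plain threading primitives (Nova's real bookkeeping lives in nova.compute.manager.InstanceEvents):

    # Sketch of the wait/pop handshake seen above, using a threading.Event
    # keyed by event name under a lock, as the logged lock names suggest.
    import threading

    events = {}
    events_lock = threading.Lock()

    def prepare(name):
        with events_lock:
            events[name] = threading.Event()
        return events[name]

    def pop(name):                        # called on "network-vif-plugged-..."
        with events_lock:
            ev = events.pop(name, None)
        if ev:
            ev.set()                      # wakes the spawning thread

    waiter = prepare("network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29")
    pop("network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29")
    print(waiter.wait(timeout=300))       # True: "wait completed in 0 seconds"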
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.293 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064355.2930276, 20ce4aa3-c077-4515-86c2-9c414a3cdd3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.293 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] VM Resumed (Lifecycle Event)
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.300 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.302 253516 INFO nova.virt.libvirt.driver [-] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Instance spawned successfully.
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.302 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.308 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.310 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.316 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.316 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.317 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.317 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.317 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.318 253516 DEBUG nova.virt.libvirt.driver [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.320 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.351 164791 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.351 164791 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgnyjxqxd/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.271 258952 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.274 258952 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.276 258952 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.276 258952 INFO oslo.privsep.daemon [-] privsep daemon running as pid 258952
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.353 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1b2e5f-0ad8-4b0a-9ded-ccfab41a1774]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.357 253516 INFO nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Took 7.88 seconds to spawn the instance on the hypervisor.
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.358 253516 DEBUG nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:52:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v640: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 511 B/s wr, 11 op/s
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.427 253516 INFO nova.compute.manager [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Took 8.54 seconds to build instance.
Nov 25 09:52:35 compute-0 nova_compute[253512]: 2025-11-25 09:52:35.438 253516 DEBUG oslo_concurrency.lockutils [None req-92da9455-63f2-4cca-bd67-a67efb1e21d1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:35 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff40089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:35.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.901 258952 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.902 258952 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:35 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:35.902 258952 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:36.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:36 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:36.540 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[e15178a3-5735-496b-8c05-49a0f78b3c3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:36 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:36.541 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc625dec4-a1 in ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:52:36 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:36.542 258952 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc625dec4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:52:36 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:36.543 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[2257c7d9-d185-4229-b508-3d0e5b478b78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:36 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:36.546 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[3c46c3d0-ae61-4e3d-a784-cc95bd4f235e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:36 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:36.568 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[50513301-ded1-4970-9156-ee5a50042164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:36 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:36.580 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[12f1f8d7-7b32-4809-81ce-580678eda9a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:36 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:36.581 164791 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpbu8joyvw/privsep.sock']
Nov 25 09:52:36 compute-0 ceph-mon[74207]: pgmap v640: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 511 B/s wr, 11 op/s
Nov 25 09:52:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:36 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff40089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:37.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:37 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff40089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:37.031Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:37.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:37.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
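All three webhook receivers fail on the same DNS lookup against 192.168.122.80:53, so the retries above will keep failing until the shiftstack names resolve. A quick check that reproduces the failure outside Alertmanager (hostnames copied from the log):

    # Sketch: confirm whether the receiver hostnames resolve at all.
    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            print(host, socket.getaddrinfo(host, 8443)[0][4])
        except socket.gaierror as exc:
            print(host, "lookup failed:", exc)   # matches "no such host"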
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.139 164791 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.140 164791 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpbu8joyvw/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.062 258968 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.065 258968 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.067 258968 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.067 258968 INFO oslo.privsep.daemon [-] privsep daemon running as pid 258968
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.142 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf0e35-84e2-43dc-bf15-6ed3168c9a8b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:37 compute-0 nova_compute[253512]: 2025-11-25 09:52:37.377 253516 DEBUG nova.compute.manager [req-afced67b-ce31-40ae-9133-1132a58f3e81 req-989fc4cb-0823-4e5e-a6cb-3b894fc51aaf c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received event network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:52:37 compute-0 nova_compute[253512]: 2025-11-25 09:52:37.377 253516 DEBUG oslo_concurrency.lockutils [req-afced67b-ce31-40ae-9133-1132a58f3e81 req-989fc4cb-0823-4e5e-a6cb-3b894fc51aaf c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:37 compute-0 nova_compute[253512]: 2025-11-25 09:52:37.378 253516 DEBUG oslo_concurrency.lockutils [req-afced67b-ce31-40ae-9133-1132a58f3e81 req-989fc4cb-0823-4e5e-a6cb-3b894fc51aaf c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:37 compute-0 nova_compute[253512]: 2025-11-25 09:52:37.378 253516 DEBUG oslo_concurrency.lockutils [req-afced67b-ce31-40ae-9133-1132a58f3e81 req-989fc4cb-0823-4e5e-a6cb-3b894fc51aaf c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:37 compute-0 nova_compute[253512]: 2025-11-25 09:52:37.378 253516 DEBUG nova.compute.manager [req-afced67b-ce31-40ae-9133-1132a58f3e81 req-989fc4cb-0823-4e5e-a6cb-3b894fc51aaf c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] No waiting events found dispatching network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:52:37 compute-0 nova_compute[253512]: 2025-11-25 09:52:37.378 253516 WARNING nova.compute.manager [req-afced67b-ce31-40ae-9133-1132a58f3e81 req-989fc4cb-0823-4e5e-a6cb-3b894fc51aaf c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received unexpected event network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29 for instance with vm_state active and task_state None.
Nov 25 09:52:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v641: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 109 op/s
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.557 258968 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.557 258968 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:52:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:37.557 258968 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:52:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:37 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8002700 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:37 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:52:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 25 09:52:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 25 09:52:37 compute-0 ceph-mon[74207]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.043 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2bf4a8-95f2-468a-a8f4-9375f7da850b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.0567] manager: (tapc625dec4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.055 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[06d8b224-a2d5-49f2-ac9b-7f034d6a95b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.078 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[f9be239b-ca5a-4418-92cc-b3c9000359a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.080 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab59304-d3b1-466e-b074-72bbc87cac4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 systemd-udevd[258982]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:52:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:38.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.1032] device (tapc625dec4-a0): carrier: link connected
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.105 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[cddbd7ee-9020-46cc-81d0-fc00436bc977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.120 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[409c7bf0-e36f-4a9f-92d8-a948f2795bcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc625dec4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:1f:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 313867, 'reachable_time': 42662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258992, 'error': None, 'target': 'ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.133 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[fafc0f53-e336-45a5-adef-fcb9b60ce674]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:1fb2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 313867, 'tstamp': 313867}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258993, 'error': None, 'target': 'ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.147 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[455e6189-4441-432e-bd3e-63d2806a6f8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc625dec4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:1f:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 313867, 'reachable_time': 42662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258994, 'error': None, 'target': 'ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
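The large privsep replies above are netlink dumps (RTM_NEWLINK/RTM_NEWADDR) for the veth inside the ovnmeta namespace, fetched through neutron's privileged ip_lib, which is built on pyroute2. Roughly the same attributes can be read directly (namespace name taken from the log; requires root and the pyroute2 package):

    # Sketch: list link name/MAC/state inside the ovnmeta namespace with
    # pyroute2, approximating the logged RTM_NEWLINK data.
    from pyroute2 import NetNS

    ns = NetNS("ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0")
    try:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_ADDRESS"),
                  link.get_attr("IFLA_OPERSTATE"))  # e.g. tapc625dec4-a1 fa:16:3e:40:1f:b2 UP
    finally:
        ns.close()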
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.169 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[59ddcea1-ba4b-408c-87b1-012fc1c077a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.217 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fe608c-9e18-4b46-bc51-5b4077fd8f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.219 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc625dec4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.219 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.219 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc625dec4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:52:38 compute-0 nova_compute[253512]: 2025-11-25 09:52:38.221 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:38 compute-0 kernel: tapc625dec4-a0: entered promiscuous mode
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.2230] manager: (tapc625dec4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.225 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc625dec4-a0, col_values=(('external_ids', {'iface-id': 'e2aede20-7d90-49ee-89b5-efa891e733aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
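The three ovsdb transactions above (delete the port from br-ex if it exists, add it to br-int, then set external_ids:iface-id) are what let ovn-controller match the interface to its logical port and claim it. The same sequence by hand with the ovs-vsctl CLI (port name and iface-id copied from the log):

    # Sketch: reproduce the logged DelPort/AddPort/DbSet transactions.
    import subprocess

    port = "tapc625dec4-a0"
    iface_id = "e2aede20-7d90-49ee-89b5-efa891e733aa"
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port], check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port], check=True)
    subprocess.run(["ovs-vsctl", "set", "Interface", port,
                    f"external_ids:iface-id={iface_id}"], check=True)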
Nov 25 09:52:38 compute-0 nova_compute[253512]: 2025-11-25 09:52:38.226 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:38 compute-0 ovn_controller[155020]: 2025-11-25T09:52:38Z|00031|binding|INFO|Releasing lport e2aede20-7d90-49ee-89b5-efa891e733aa from this chassis (sb_readonly=0)
Nov 25 09:52:38 compute-0 nova_compute[253512]: 2025-11-25 09:52:38.242 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.244 164791 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.244 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[0c61a7dc-e3e0-4f22-9df1-9054bf6fbc55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.245 164791 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: global
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     log         /dev/log local0 debug
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     log-tag     haproxy-metadata-proxy-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     user        root
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     group       root
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     maxconn     1024
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     pidfile     /var/lib/neutron/external/pids/c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0.pid.haproxy
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     daemon
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: defaults
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     log global
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     mode http
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     option httplog
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     option dontlognull
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     option http-server-close
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     option forwardfor
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     retries                 3
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     timeout http-request    30s
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     timeout connect         30s
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     timeout client          32s
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     timeout server          32s
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     timeout http-keep-alive 30s
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: listen listener
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     bind 169.254.169.254:80
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:     http-request add-header X-OVN-Network-ID c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:52:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:52:38.246 164791 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0', 'env', 'PROCESS_TAG=haproxy-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
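The generated configuration is written under /var/lib/neutron/ovn-metadata-proxy/ and haproxy is started inside the ovnmeta namespace through rootwrap, as the command above shows. A stripped-down sketch of rendering and launching such a proxy (the config is abbreviated to the pidfile/bind/server lines from the log, not neutron's full template; root and haproxy required):

    # Sketch: write a minimal metadata-proxy haproxy config and start it
    # inside the ovnmeta namespace, mirroring the logged launch command.
    import subprocess

    net = "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0"
    cfg = "\n".join([
        "global",
        f"    pidfile /var/lib/neutron/external/pids/{net}.pid.haproxy",
        "    daemon",
        "listen listener",
        "    bind 169.254.169.254:80",
        "    server metadata /var/lib/neutron/metadata_proxy",
        "",
    ])
    path = f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"
    with open(path, "w") as f:
        f.write(cfg)
    subprocess.run(["ip", "netns", "exec", f"ovnmeta-{net}",
                    "haproxy", "-f", path], check=True)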
Nov 25 09:52:38 compute-0 podman[259023]: 2025-11-25 09:52:38.524491664 +0000 UTC m=+0.036649077 container create a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 09:52:38 compute-0 systemd[1]: Started libpod-conmon-a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956.scope.
Nov 25 09:52:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5ff7440379f9e6476d5041e482c62dbb06d7f034046dcd2104d71d1713c120/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:38 compute-0 podman[259023]: 2025-11-25 09:52:38.575975922 +0000 UTC m=+0.088133336 container init a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:52:38 compute-0 podman[259023]: 2025-11-25 09:52:38.581214606 +0000 UTC m=+0.093372019 container start a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 09:52:38 compute-0 podman[259023]: 2025-11-25 09:52:38.506800149 +0000 UTC m=+0.018957582 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
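[editor's note] The four podman lines above are one container lifecycle: image pull, create, init, start for neutron-haproxy-ovnmeta-c625dec4-…. If you want to watch the same sequence live rather than grep the journal, a small sketch using `podman events` (the JSON field names are assumed from podman-events(1), not taken from this log):

```python
# Sketch: stream podman lifecycle events for the metadata-proxy container.
import json
import subprocess

NAME = "neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0"

proc = subprocess.Popen(
    ["podman", "events", "--format", "json", "--filter", f"container={NAME}"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:          # one JSON object per event
    ev = json.loads(line)
    # Field names ("Status", "Name", "Time") assumed from podman's JSON output.
    print(ev.get("Status"), ev.get("Name"), ev.get("Time"))
```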
Nov 25 09:52:38 compute-0 ovn_controller[155020]: 2025-11-25T09:52:38Z|00032|binding|INFO|Releasing lport e2aede20-7d90-49ee-89b5-efa891e733aa from this chassis (sb_readonly=0)
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.6011] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.6014] device (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:52:38 compute-0 nova_compute[253512]: 2025-11-25 09:52:38.601 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.6023] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.6025] device (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.6033] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.6037] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.6040] device (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 09:52:38 compute-0 NetworkManager[48903]: <info>  [1764064358.6043] device (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 09:52:38 compute-0 neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0[259035]: [NOTICE]   (259039) : New worker (259041) forked
Nov 25 09:52:38 compute-0 neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0[259035]: [NOTICE]   (259039) : Loading success.
Nov 25 09:52:38 compute-0 ceph-mon[74207]: pgmap v641: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 109 op/s
Nov 25 09:52:38 compute-0 ceph-mon[74207]: osdmap e144: 3 total, 3 up, 3 in
Nov 25 09:52:38 compute-0 ovn_controller[155020]: 2025-11-25T09:52:38Z|00033|binding|INFO|Releasing lport e2aede20-7d90-49ee-89b5-efa891e733aa from this chassis (sb_readonly=0)
Nov 25 09:52:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:38 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:38 compute-0 nova_compute[253512]: 2025-11-25 09:52:38.645 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:38 compute-0 nova_compute[253512]: 2025-11-25 09:52:38.647 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:38 compute-0 nova_compute[253512]: 2025-11-25 09:52:38.987 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:39 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:39 compute-0 nova_compute[253512]: 2025-11-25 09:52:39.316 253516 DEBUG nova.compute.manager [req-e58c9721-9ad9-4dd2-af08-71e94548cb7a req-446e8cd5-1e42-453c-9fc1-30b1b1bdfd54 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received event network-changed-28155732-58f2-49db-83c4-a44433c25b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:52:39 compute-0 nova_compute[253512]: 2025-11-25 09:52:39.317 253516 DEBUG nova.compute.manager [req-e58c9721-9ad9-4dd2-af08-71e94548cb7a req-446e8cd5-1e42-453c-9fc1-30b1b1bdfd54 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Refreshing instance network info cache due to event network-changed-28155732-58f2-49db-83c4-a44433c25b29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:52:39 compute-0 nova_compute[253512]: 2025-11-25 09:52:39.317 253516 DEBUG oslo_concurrency.lockutils [req-e58c9721-9ad9-4dd2-af08-71e94548cb7a req-446e8cd5-1e42-453c-9fc1-30b1b1bdfd54 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:52:39 compute-0 nova_compute[253512]: 2025-11-25 09:52:39.317 253516 DEBUG oslo_concurrency.lockutils [req-e58c9721-9ad9-4dd2-af08-71e94548cb7a req-446e8cd5-1e42-453c-9fc1-30b1b1bdfd54 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:52:39 compute-0 nova_compute[253512]: 2025-11-25 09:52:39.317 253516 DEBUG nova.network.neutron [req-e58c9721-9ad9-4dd2-af08-71e94548cb7a req-446e8cd5-1e42-453c-9fc1-30b1b1bdfd54 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Refreshing network info cache for port 28155732-58f2-49db-83c4-a44433c25b29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:52:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v643: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 98 op/s
Nov 25 09:52:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:39 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:39 compute-0 nova_compute[253512]: 2025-11-25 09:52:39.619 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:40.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
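[editor's note] The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.102 and .100 recur roughly every two seconds throughout this section: they are load-balancer health probes against radosgw, and the 200 responses mean the gateway is considered up. A sketch of an equivalent probe (host and port are placeholders; the log does not show which port the probes hit):

```python
# Sketch: a HEAD / liveness probe like the ones the frontends send to radosgw.
import http.client

def rgw_alive(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status == 200   # 200 == healthy, as logged
    except OSError:
        return False
    finally:
        conn.close()
```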
Nov 25 09:52:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:40] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 25 09:52:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:40] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 25 09:52:40 compute-0 ceph-mon[74207]: pgmap v643: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 98 op/s
Nov 25 09:52:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:40 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8002700 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:41 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095241 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:52:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v644: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 09:52:41 compute-0 sudo[259048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:52:41 compute-0 sudo[259048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:41 compute-0 sudo[259048]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:41 compute-0 nova_compute[253512]: 2025-11-25 09:52:41.530 253516 DEBUG nova.network.neutron [req-e58c9721-9ad9-4dd2-af08-71e94548cb7a req-446e8cd5-1e42-453c-9fc1-30b1b1bdfd54 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updated VIF entry in instance network info cache for port 28155732-58f2-49db-83c4-a44433c25b29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:52:41 compute-0 nova_compute[253512]: 2025-11-25 09:52:41.530 253516 DEBUG nova.network.neutron [req-e58c9721-9ad9-4dd2-af08-71e94548cb7a req-446e8cd5-1e42-453c-9fc1-30b1b1bdfd54 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updating instance_info_cache with network_info: [{"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:52:41 compute-0 nova_compute[253512]: 2025-11-25 09:52:41.547 253516 DEBUG oslo_concurrency.lockutils [req-e58c9721-9ad9-4dd2-af08-71e94548cb7a req-446e8cd5-1e42-453c-9fc1-30b1b1bdfd54 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
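[editor's note] The nova lines from 09:52:39.316 to 09:52:41.547 are one unit: a network-changed event arrives, nova acquires the per-instance "refresh_cache-…" lock, refreshes the network info cache for the port, writes the updated VIF entry, and releases the lock. A stripped-down sketch of that serialization pattern, using a plain threading.Lock in place of oslo_concurrency.lockutils (function and variable names are hypothetical):

```python
# Sketch of the lock-guarded cache refresh seen in the nova lines above.
import threading

_cache = {}
_locks = {}
_locks_guard = threading.Lock()

def _lock_for(instance_id: str) -> threading.Lock:
    # One lock per instance, created on first use.
    with _locks_guard:
        return _locks.setdefault(f"refresh_cache-{instance_id}",
                                 threading.Lock())

def refresh_network_cache(instance_id: str, port_id: str, fetch):
    """Serialize per-instance refreshes, as nova does per external event."""
    with _lock_for(instance_id):               # "Acquired lock ..."
        _cache[instance_id] = fetch(port_id)   # "Refreshing network info ..."
    # lock released on exit: "Releasing lock ..."
```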
Nov 25 09:52:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:41 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:42.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:42 compute-0 ceph-mon[74207]: pgmap v644: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 09:52:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:42 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:43 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8002700 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v645: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 09:52:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:43 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400bcd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:43.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:43 compute-0 nova_compute[253512]: 2025-11-25 09:52:43.990 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:44.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:44 compute-0 nova_compute[253512]: 2025-11-25 09:52:44.620 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:44 compute-0 ceph-mon[74207]: pgmap v645: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 09:52:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:44 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400bcd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:44 compute-0 sudo[259077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:52:44 compute-0 sudo[259077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:44 compute-0 sudo[259077]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:44 compute-0 sudo[259102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 25 09:52:44 compute-0 sudo[259102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:52:44
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['default.rgw.meta', '.nfs', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', '.mgr']
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
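[editor's note] The balancer run above is a no-op: mode upmap with "max misplaced 0.050000" means the balancer only applies changes while the misplaced-PG ratio stays under 5%, and with all 337 PGs active+clean it found nothing to move ("prepared 0/10 upmap changes"). A trivial sketch of that gate, assumed from the logged parameters rather than from the mgr source:

```python
# Sketch of the balancer's misplaced-ratio gate implied by the lines above.
def may_optimize(misplaced_pgs: int, total_pgs: int,
                 max_misplaced: float = 0.05) -> bool:
    return (misplaced_pgs / total_pgs) < max_misplaced

# All 337 PGs are active+clean in the surrounding pgmap lines, so nothing
# is misplaced and the balancer is free to plan.
assert may_optimize(0, 337)
```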
Nov 25 09:52:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:52:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:52:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:52:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:52:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:52:44 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 sudo[259102]: pam_unix(sudo:session): session closed for user root
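[editor's note] The sudo sessions above show cephadm's standard probe pattern: a throwaway `sudo /bin/true`, then `sudo which python3`, then running the staged cephadm script (`check-host`, and later `gather-facts` and `ceph-volume`) under a 895-second timeout. A local sketch of that call sequence; cephadm actually drives these commands over SSH from the active mgr, which this sketch omits:

```python
# Sketch of the cephadm call pattern in the sudo lines above.
# Paths and the timeout value are copied verbatim from the log.
import subprocess

FSID = "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

python3 = subprocess.run(
    ["sudo", "which", "python3"],
    capture_output=True, text=True, check=True,
).stdout.strip()

subprocess.run(
    ["sudo", python3, CEPHADM, "--timeout", "895", "check-host"],
    check=True,
)
```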
Nov 25 09:52:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:52:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:52:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:45 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400bcd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:52:45 compute-0 sudo[259146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:52:45 compute-0 sudo[259146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:45 compute-0 sudo[259146]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:45 compute-0 sudo[259178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:52:45 compute-0 sudo[259178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:45 compute-0 podman[259170]: 2025-11-25 09:52:45.169595848 +0000 UTC m=+0.082101568 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:52:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 09:52:45 compute-0 sudo[259178]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:52:45 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:52:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:52:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:45 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x5632768251c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:52:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:52:45 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:52:45 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:52:45 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:52:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:52:45 compute-0 sudo[259242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:52:45 compute-0 sudo[259242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:45 compute-0 sudo[259242]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:45 compute-0 sudo[259267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:52:45 compute-0 sudo[259267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:45 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 09:52:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:46.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:46 compute-0 podman[259327]: 2025-11-25 09:52:46.150586138 +0000 UTC m=+0.053241151 container create fe8d9b2c2a038dd0bb862766987ea9e933bf345ee9b051247e9a706317bea5c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_feynman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:52:46 compute-0 systemd[1]: Started libpod-conmon-fe8d9b2c2a038dd0bb862766987ea9e933bf345ee9b051247e9a706317bea5c7.scope.
Nov 25 09:52:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:52:46 compute-0 podman[259327]: 2025-11-25 09:52:46.21267345 +0000 UTC m=+0.115328454 container init fe8d9b2c2a038dd0bb862766987ea9e933bf345ee9b051247e9a706317bea5c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:52:46 compute-0 podman[259327]: 2025-11-25 09:52:46.224827418 +0000 UTC m=+0.127482431 container start fe8d9b2c2a038dd0bb862766987ea9e933bf345ee9b051247e9a706317bea5c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_feynman, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:52:46 compute-0 podman[259327]: 2025-11-25 09:52:46.227137361 +0000 UTC m=+0.129792384 container attach fe8d9b2c2a038dd0bb862766987ea9e933bf345ee9b051247e9a706317bea5c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:52:46 compute-0 flamboyant_feynman[259339]: 167 167
Nov 25 09:52:46 compute-0 podman[259327]: 2025-11-25 09:52:46.13378474 +0000 UTC m=+0.036439783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:52:46 compute-0 systemd[1]: libpod-fe8d9b2c2a038dd0bb862766987ea9e933bf345ee9b051247e9a706317bea5c7.scope: Deactivated successfully.
Nov 25 09:52:46 compute-0 podman[259327]: 2025-11-25 09:52:46.231742862 +0000 UTC m=+0.134397895 container died fe8d9b2c2a038dd0bb862766987ea9e933bf345ee9b051247e9a706317bea5c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_feynman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb70fe8c4f6f498685a3e8c1070aaafb140675cdb3f6fdf0fa39253001334c9c-merged.mount: Deactivated successfully.
Nov 25 09:52:46 compute-0 podman[259327]: 2025-11-25 09:52:46.259521519 +0000 UTC m=+0.162176532 container remove fe8d9b2c2a038dd0bb862766987ea9e933bf345ee9b051247e9a706317bea5c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_feynman, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:52:46 compute-0 systemd[1]: libpod-conmon-fe8d9b2c2a038dd0bb862766987ea9e933bf345ee9b051247e9a706317bea5c7.scope: Deactivated successfully.
Nov 25 09:52:46 compute-0 podman[259362]: 2025-11-25 09:52:46.403353445 +0000 UTC m=+0.038605575 container create 55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 09:52:46 compute-0 systemd[1]: Started libpod-conmon-55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c.scope.
Nov 25 09:52:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5c2ae6da7cb11d41e41715c7951875236a33e88e8942fd1e2b6b6079352c21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5c2ae6da7cb11d41e41715c7951875236a33e88e8942fd1e2b6b6079352c21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5c2ae6da7cb11d41e41715c7951875236a33e88e8942fd1e2b6b6079352c21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5c2ae6da7cb11d41e41715c7951875236a33e88e8942fd1e2b6b6079352c21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5c2ae6da7cb11d41e41715c7951875236a33e88e8942fd1e2b6b6079352c21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:46 compute-0 podman[259362]: 2025-11-25 09:52:46.472765935 +0000 UTC m=+0.108018084 container init 55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:52:46 compute-0 podman[259362]: 2025-11-25 09:52:46.47850933 +0000 UTC m=+0.113761459 container start 55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_margulis, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:52:46 compute-0 podman[259362]: 2025-11-25 09:52:46.479741012 +0000 UTC m=+0.114993141 container attach 55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:52:46 compute-0 podman[259362]: 2025-11-25 09:52:46.38623478 +0000 UTC m=+0.021486929 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:52:46 compute-0 ceph-mon[74207]: pgmap v646: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 09:52:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:46 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400bcd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:46 compute-0 jolly_margulis[259375]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:52:46 compute-0 jolly_margulis[259375]: --> All data devices are unavailable
Nov 25 09:52:46 compute-0 systemd[1]: libpod-55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c.scope: Deactivated successfully.
Nov 25 09:52:46 compute-0 conmon[259375]: conmon 55dd19717e7218628773 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c.scope/container/memory.events
Nov 25 09:52:46 compute-0 podman[259390]: 2025-11-25 09:52:46.839833026 +0000 UTC m=+0.026204461 container died 55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_margulis, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad5c2ae6da7cb11d41e41715c7951875236a33e88e8942fd1e2b6b6079352c21-merged.mount: Deactivated successfully.
Nov 25 09:52:46 compute-0 podman[259390]: 2025-11-25 09:52:46.875828741 +0000 UTC m=+0.062200176 container remove 55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:52:46 compute-0 systemd[1]: libpod-conmon-55dd19717e7218628773efaf36c37c7d6567b293c55224c09269753d8c7edc6c.scope: Deactivated successfully.
Nov 25 09:52:46 compute-0 sudo[259267]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:46 compute-0 sudo[259402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:52:46 compute-0 sudo[259402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:46 compute-0 sudo[259402]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:47 compute-0 sudo[259427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:52:47 compute-0 sudo[259427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
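[editor's note] After the `lvm batch` attempt reports "All data devices are unavailable" at 09:52:46 (the LV /dev/ceph_vg0/ceph_lv0 is presumably already consumed), cephadm falls back to re-inventorying with `ceph-volume lvm list --format json`, the command wrapped in the sudo line above. A sketch of consuming that JSON; the key names ("lv_path", "devices") are assumed from ceph-volume's output format, not shown in this log:

```python
# Sketch: parse the `ceph-volume lvm list --format json` call issued above.
# The JSON maps OSD id -> list of LV records.
import json
import subprocess

out = subprocess.run(
    ["ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

for osd_id, lvs in json.loads(out).items():
    for lv in lvs:
        print(osd_id, lv.get("lv_path"), lv.get("devices"))
```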
Nov 25 09:52:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:47.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:47.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:47.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:47.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
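[editor's note] The alertmanager failures above are pure DNS: all three ceph-dashboard webhook receivers (np0005534694–96.shiftstack:8443) fail with "no such host" against the resolver at 192.168.122.80:53, so the notifications will keep retrying until those names resolve. A quick sketch to reproduce the lookup outside alertmanager:

```python
# Sketch: reproduce the failing lookups from the alertmanager lines above.
import socket

for host in ("np0005534694.shiftstack",
             "np0005534695.shiftstack",
             "np0005534696.shiftstack"):
    try:
        addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
        print(host, "->", sorted(addrs))
    except socket.gaierror as exc:
        print(host, "-> lookup failed:", exc)   # matches "no such host"
```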
Nov 25 09:52:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:47 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400bcd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:47 compute-0 podman[259485]: 2025-11-25 09:52:47.378883383 +0000 UTC m=+0.029866473 container create b569c56ccac4b78960f3362c1e64192a25b1f4769648616ff5ebf63e4a1d3a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:52:47 compute-0 systemd[1]: Started libpod-conmon-b569c56ccac4b78960f3362c1e64192a25b1f4769648616ff5ebf63e4a1d3a9c.scope.
Nov 25 09:52:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 109 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.4 MiB/s wr, 94 op/s
Nov 25 09:52:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:52:47 compute-0 podman[259485]: 2025-11-25 09:52:47.445417839 +0000 UTC m=+0.096400929 container init b569c56ccac4b78960f3362c1e64192a25b1f4769648616ff5ebf63e4a1d3a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_sutherland, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 09:52:47 compute-0 podman[259485]: 2025-11-25 09:52:47.450965024 +0000 UTC m=+0.101948114 container start b569c56ccac4b78960f3362c1e64192a25b1f4769648616ff5ebf63e4a1d3a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Nov 25 09:52:47 compute-0 podman[259485]: 2025-11-25 09:52:47.452286805 +0000 UTC m=+0.103269885 container attach b569c56ccac4b78960f3362c1e64192a25b1f4769648616ff5ebf63e4a1d3a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_sutherland, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:52:47 compute-0 vigilant_sutherland[259498]: 167 167
Nov 25 09:52:47 compute-0 systemd[1]: libpod-b569c56ccac4b78960f3362c1e64192a25b1f4769648616ff5ebf63e4a1d3a9c.scope: Deactivated successfully.
Nov 25 09:52:47 compute-0 podman[259485]: 2025-11-25 09:52:47.455333367 +0000 UTC m=+0.106316446 container died b569c56ccac4b78960f3362c1e64192a25b1f4769648616ff5ebf63e4a1d3a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:52:47 compute-0 podman[259485]: 2025-11-25 09:52:47.367237933 +0000 UTC m=+0.018221024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a0d5a19521b7d3082712db5a44116429ea752c3c2e91a903f2279683e8cbe8e-merged.mount: Deactivated successfully.
Nov 25 09:52:47 compute-0 podman[259485]: 2025-11-25 09:52:47.479369139 +0000 UTC m=+0.130352220 container remove b569c56ccac4b78960f3362c1e64192a25b1f4769648616ff5ebf63e4a1d3a9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 09:52:47 compute-0 systemd[1]: libpod-conmon-b569c56ccac4b78960f3362c1e64192a25b1f4769648616ff5ebf63e4a1d3a9c.scope: Deactivated successfully.
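[annotation] The container b569c56c... above runs through the full create -> init -> start -> attach -> died -> remove sequence in under 150 ms, printing only "167 167" (167 is the ceph uid/gid on these images) — the signature of cephadm launching a one-shot probe container. A minimal sketch of that pattern; the stat entrypoint is an assumption about what produced "167 167", while the image digest is taken from the log:

    import subprocess

    # One-shot run mirroring the lifecycle above: "--rm" removes the container
    # as soon as its entrypoint exits, so systemd only sees the transient
    # libpod-*.scope units appear and deactivate.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True).stdout
    print(out.strip())  # expected: "167 167", as in the log
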
Nov 25 09:52:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:47 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8002700 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:47 compute-0 podman[259520]: 2025-11-25 09:52:47.63117244 +0000 UTC m=+0.040756659 container create 9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_austin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:52:47 compute-0 systemd[1]: Started libpod-conmon-9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f.scope.
Nov 25 09:52:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/673daa167f635d1f876731698a13bd8da3f396adf7ed442784ce67f9ad5d3cd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/673daa167f635d1f876731698a13bd8da3f396adf7ed442784ce67f9ad5d3cd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/673daa167f635d1f876731698a13bd8da3f396adf7ed442784ce67f9ad5d3cd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/673daa167f635d1f876731698a13bd8da3f396adf7ed442784ce67f9ad5d3cd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:47 compute-0 podman[259520]: 2025-11-25 09:52:47.705171574 +0000 UTC m=+0.114755793 container init 9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 09:52:47 compute-0 podman[259520]: 2025-11-25 09:52:47.615570303 +0000 UTC m=+0.025154542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:52:47 compute-0 podman[259520]: 2025-11-25 09:52:47.711804616 +0000 UTC m=+0.121388835 container start 9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:52:47 compute-0 podman[259520]: 2025-11-25 09:52:47.713022531 +0000 UTC m=+0.122606741 container attach 9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:52:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:52:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:47.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:52:47 compute-0 ovn_controller[155020]: 2025-11-25T09:52:47Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:fd:4d 10.100.0.4
Nov 25 09:52:47 compute-0 ovn_controller[155020]: 2025-11-25T09:52:47Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:fd:4d 10.100.0.4
Nov 25 09:52:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:47 compute-0 pensive_austin[259533]: {
Nov 25 09:52:47 compute-0 pensive_austin[259533]:     "1": [
Nov 25 09:52:47 compute-0 pensive_austin[259533]:         {
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "devices": [
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "/dev/loop3"
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             ],
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "lv_name": "ceph_lv0",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "lv_size": "21470642176",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "name": "ceph_lv0",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "tags": {
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.cluster_name": "ceph",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.crush_device_class": "",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.encrypted": "0",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.osd_id": "1",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.type": "block",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.vdo": "0",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:                 "ceph.with_tpm": "0"
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             },
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "type": "block",
Nov 25 09:52:47 compute-0 pensive_austin[259533]:             "vg_name": "ceph_vg0"
Nov 25 09:52:47 compute-0 pensive_austin[259533]:         }
Nov 25 09:52:47 compute-0 pensive_austin[259533]:     ]
Nov 25 09:52:47 compute-0 pensive_austin[259533]: }
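[annotation] The JSON emitted by pensive_austin has the shape of ceph-volume lvm list --format json: a mapping of OSD id to LV records with lv_path, devices, and parsed ceph.* tags. A minimal parsing sketch over a sample trimmed to the fields visible above:

    import json

    raw = """
    {"1": [{"devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "tags": {"ceph.osd_id": "1",
                     "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
                     "ceph.type": "block"}}]}
    """
    # Map each OSD id to its block LV and backing physical devices.
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print("osd.%s: %s on %s (PVs: %s)" % (
                osd_id, lv["tags"]["ceph.type"], lv["lv_path"],
                ", ".join(lv["devices"])))
    # -> osd.1: block on /dev/ceph_vg0/ceph_lv0 (PVs: /dev/loop3)
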
Nov 25 09:52:47 compute-0 systemd[1]: libpod-9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f.scope: Deactivated successfully.
Nov 25 09:52:47 compute-0 conmon[259533]: conmon 9a8d1a23a00858ece07c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f.scope/container/memory.events
Nov 25 09:52:47 compute-0 podman[259520]: 2025-11-25 09:52:47.982041672 +0000 UTC m=+0.391625891 container died 9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_austin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-673daa167f635d1f876731698a13bd8da3f396adf7ed442784ce67f9ad5d3cd5-merged.mount: Deactivated successfully.
Nov 25 09:52:48 compute-0 podman[259520]: 2025-11-25 09:52:48.005203878 +0000 UTC m=+0.414788097 container remove 9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:52:48 compute-0 systemd[1]: libpod-conmon-9a8d1a23a00858ece07c6203dc6cde335d6a64b5197a15f93c0ff0f9597a836f.scope: Deactivated successfully.
Nov 25 09:52:48 compute-0 sudo[259427]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:48 compute-0 sudo[259554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:52:48 compute-0 sudo[259554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:48 compute-0 sudo[259554]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:48.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:48 compute-0 sudo[259579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:52:48 compute-0 sudo[259579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:48 compute-0 podman[259633]: 2025-11-25 09:52:48.475214649 +0000 UTC m=+0.037185099 container create 745073fe75b8ed5daf0fd371f3caa203f6ede1fc31ccd6b862fae5f40ec4d0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_margulis, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 09:52:48 compute-0 systemd[1]: Started libpod-conmon-745073fe75b8ed5daf0fd371f3caa203f6ede1fc31ccd6b862fae5f40ec4d0f9.scope.
Nov 25 09:52:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:52:48 compute-0 podman[259633]: 2025-11-25 09:52:48.535695884 +0000 UTC m=+0.097666334 container init 745073fe75b8ed5daf0fd371f3caa203f6ede1fc31ccd6b862fae5f40ec4d0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:52:48 compute-0 podman[259633]: 2025-11-25 09:52:48.541080153 +0000 UTC m=+0.103050603 container start 745073fe75b8ed5daf0fd371f3caa203f6ede1fc31ccd6b862fae5f40ec4d0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_margulis, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:52:48 compute-0 podman[259633]: 2025-11-25 09:52:48.542057053 +0000 UTC m=+0.104027494 container attach 745073fe75b8ed5daf0fd371f3caa203f6ede1fc31ccd6b862fae5f40ec4d0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:52:48 compute-0 upbeat_margulis[259647]: 167 167
Nov 25 09:52:48 compute-0 systemd[1]: libpod-745073fe75b8ed5daf0fd371f3caa203f6ede1fc31ccd6b862fae5f40ec4d0f9.scope: Deactivated successfully.
Nov 25 09:52:48 compute-0 podman[259633]: 2025-11-25 09:52:48.54661786 +0000 UTC m=+0.108588321 container died 745073fe75b8ed5daf0fd371f3caa203f6ede1fc31ccd6b862fae5f40ec4d0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 09:52:48 compute-0 podman[259633]: 2025-11-25 09:52:48.460103797 +0000 UTC m=+0.022074247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cead5f47d109421fee045fa03e8297db181a9d55133de16391ddfb12666cd9e-merged.mount: Deactivated successfully.
Nov 25 09:52:48 compute-0 podman[259633]: 2025-11-25 09:52:48.570329712 +0000 UTC m=+0.132300152 container remove 745073fe75b8ed5daf0fd371f3caa203f6ede1fc31ccd6b862fae5f40ec4d0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_margulis, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 09:52:48 compute-0 systemd[1]: libpod-conmon-745073fe75b8ed5daf0fd371f3caa203f6ede1fc31ccd6b862fae5f40ec4d0f9.scope: Deactivated successfully.
Nov 25 09:52:48 compute-0 ceph-mon[74207]: pgmap v647: 337 pgs: 337 active+clean; 109 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.4 MiB/s wr, 94 op/s
Nov 25 09:52:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:48 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x5632768251c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:48 compute-0 podman[259669]: 2025-11-25 09:52:48.722122602 +0000 UTC m=+0.035526662 container create 9a9b539e927ddd28271c45c2daf1be53bc8a50ecb7867f242fcf364aa0e4c7a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mestorf, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:52:48 compute-0 systemd[1]: Started libpod-conmon-9a9b539e927ddd28271c45c2daf1be53bc8a50ecb7867f242fcf364aa0e4c7a6.scope.
Nov 25 09:52:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500dde8f16fe662e30d9e4a3844e7e20f8733d9a94e67e9da97824122898b4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500dde8f16fe662e30d9e4a3844e7e20f8733d9a94e67e9da97824122898b4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500dde8f16fe662e30d9e4a3844e7e20f8733d9a94e67e9da97824122898b4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500dde8f16fe662e30d9e4a3844e7e20f8733d9a94e67e9da97824122898b4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:52:48 compute-0 podman[259669]: 2025-11-25 09:52:48.79871771 +0000 UTC m=+0.112121779 container init 9a9b539e927ddd28271c45c2daf1be53bc8a50ecb7867f242fcf364aa0e4c7a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:52:48 compute-0 podman[259669]: 2025-11-25 09:52:48.804248194 +0000 UTC m=+0.117652253 container start 9a9b539e927ddd28271c45c2daf1be53bc8a50ecb7867f242fcf364aa0e4c7a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mestorf, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:52:48 compute-0 podman[259669]: 2025-11-25 09:52:48.710260024 +0000 UTC m=+0.023664103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:52:48 compute-0 podman[259669]: 2025-11-25 09:52:48.809612434 +0000 UTC m=+0.123016503 container attach 9a9b539e927ddd28271c45c2daf1be53bc8a50ecb7867f242fcf364aa0e4c7a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:52:48 compute-0 nova_compute[253512]: 2025-11-25 09:52:48.991 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:49 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400bcd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:49 compute-0 lvm[259758]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:52:49 compute-0 suspicious_mestorf[259683]: {}
Nov 25 09:52:49 compute-0 lvm[259758]: VG ceph_vg0 finished
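[annotation] The "{}" printed by suspicious_mestorf is the entire result of the raw list call issued at 09:52:48 — expected here, since the only OSD on this host is LVM-backed (see the lvm list output above) and raw list reports only OSDs prepared directly on raw devices. A rough equivalent, assuming cephadm is on PATH (the log instead invokes a copied cephadm script under /var/lib/ceph/<fsid>/ directly); the fsid comes from the log:

    import json
    import subprocess

    cmd = ["cephadm", "ceph-volume",
           "--fsid", "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
           "--", "raw", "list", "--format", "json"]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    if not json.loads(out or "{}"):
        # "{}" as in the log: these OSDs are LVM-backed, so raw mode finds none.
        print("no raw-mode OSD devices")
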
Nov 25 09:52:49 compute-0 systemd[1]: libpod-9a9b539e927ddd28271c45c2daf1be53bc8a50ecb7867f242fcf364aa0e4c7a6.scope: Deactivated successfully.
Nov 25 09:52:49 compute-0 podman[259669]: 2025-11-25 09:52:49.405592333 +0000 UTC m=+0.718996402 container died 9a9b539e927ddd28271c45c2daf1be53bc8a50ecb7867f242fcf364aa0e4c7a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mestorf, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7500dde8f16fe662e30d9e4a3844e7e20f8733d9a94e67e9da97824122898b4f-merged.mount: Deactivated successfully.
Nov 25 09:52:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 109 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 82 op/s
Nov 25 09:52:49 compute-0 podman[259669]: 2025-11-25 09:52:49.433301559 +0000 UTC m=+0.746705619 container remove 9a9b539e927ddd28271c45c2daf1be53bc8a50ecb7867f242fcf364aa0e4c7a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:52:49 compute-0 systemd[1]: libpod-conmon-9a9b539e927ddd28271c45c2daf1be53bc8a50ecb7867f242fcf364aa0e4c7a6.scope: Deactivated successfully.
Nov 25 09:52:49 compute-0 sudo[259579]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:52:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:49 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:52:49 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:49 compute-0 sudo[259772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:52:49 compute-0 sudo[259772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:52:49 compute-0 sudo[259772]: pam_unix(sudo:session): session closed for user root
Nov 25 09:52:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:49 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5632768251c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:49 compute-0 nova_compute[253512]: 2025-11-25 09:52:49.623 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:49.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:50.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:50] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 25 09:52:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:52:50] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 25 09:52:50 compute-0 ceph-mon[74207]: pgmap v648: 337 pgs: 337 active+clean; 109 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 82 op/s
Nov 25 09:52:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:52:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:50 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400bcd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:51 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x5632768251c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Nov 25 09:52:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:51 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400c840 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:52.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:52 compute-0 ceph-mon[74207]: pgmap v649: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Nov 25 09:52:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:52 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8003fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:53 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400c840 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:53 compute-0 nova_compute[253512]: 2025-11-25 09:52:53.380 253516 INFO nova.compute.manager [None req-5791afe5-a3cd-4dbf-98bc-d7ab44614197 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Get console output
Nov 25 09:52:53 compute-0 nova_compute[253512]: 2025-11-25 09:52:53.384 253516 INFO oslo.privsep.daemon [None req-5791afe5-a3cd-4dbf-98bc-d7ab44614197 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpaftzgd4m/privsep.sock']
Nov 25 09:52:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:52:53 compute-0 podman[259804]: 2025-11-25 09:52:53.530701501 +0000 UTC m=+0.078756710 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 09:52:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:53 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x5632768251c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:53.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:53 compute-0 nova_compute[253512]: 2025-11-25 09:52:53.990 253516 INFO oslo.privsep.daemon [None req-5791afe5-a3cd-4dbf-98bc-d7ab44614197 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Spawned new privsep daemon via rootwrap
Nov 25 09:52:53 compute-0 nova_compute[253512]: 2025-11-25 09:52:53.992 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:53 compute-0 nova_compute[253512]: 2025-11-25 09:52:53.879 259829 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 09:52:53 compute-0 nova_compute[253512]: 2025-11-25 09:52:53.884 259829 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 09:52:53 compute-0 nova_compute[253512]: 2025-11-25 09:52:53.886 259829 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 09:52:53 compute-0 nova_compute[253512]: 2025-11-25 09:52:53.886 259829 INFO oslo.privsep.daemon [-] privsep daemon running as pid 259829
Nov 25 09:52:54 compute-0 nova_compute[253512]: 2025-11-25 09:52:54.072 259829 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 09:52:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:52:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:54.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
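[annotation] The anonymous "HEAD / HTTP/1.0" 200 entries recur from 192.168.122.100 and .102 roughly every two seconds — the signature of load-balancer health probes rather than user traffic. A probe of the same shape, assuming the beast frontend listens on compute-0:8080 (neither host nor port is shown in the log):

    import http.client

    # Unauthenticated HEAD /, as in the beast access log; a healthy RGW
    # answers 200. Host and port are placeholders, not taken from the log.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200
    conn.close()
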
Nov 25 09:52:54 compute-0 ceph-mon[74207]: pgmap v650: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:52:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1195725717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:52:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1195725717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:52:54 compute-0 nova_compute[253512]: 2025-11-25 09:52:54.623 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:54 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400c9c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:55 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8003fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
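[annotation] Each pg_autoscaler "pg target" above is reproducible as usage_fraction x bias x 300; the factor 300 is inferred from the logged numbers (consistent with a budget of mon_target_pg_per_osd = 100 across this cluster's 3 OSDs, though the log itself does not state it). A worked check against three of the lines:

    # budget = 300 is inferred: every logged "pg target" equals
    # usage_fraction * bias * 300 before quantization to a power of two.
    budget = 300
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("vms",                0.00075666583235658,   1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for name, usage, bias in pools:
        print(name, usage * bias * budget)
    # .mgr ~0.0021557, vms ~0.2270, cephfs.cephfs.meta ~0.0006105 --
    # matching the logged targets, which quantize to 1, 32 and 16 PGs.
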
Nov 25 09:52:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:52:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:55 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400c9c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:55.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:56.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:56 compute-0 ceph-mon[74207]: pgmap v651: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:52:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:56 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x5632768251c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:57.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:57.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:57.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:52:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:52:57.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
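[annotation] All three dashboard webhook targets fail with "no such host" against resolver 192.168.122.80, so alertmanager will keep retrying until the np000553469{4,5,6}.shiftstack names resolve. A quick reproduction of the lookup, assuming it runs on a host using the same resolver:

    import socket

    # The three webhook targets alertmanager cannot resolve.
    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            print(host, socket.getaddrinfo(host, 8443)[0][4])
        except socket.gaierror as exc:
            # Corresponds to the log's "no such host" from 192.168.122.80:53.
            print(host, "->", exc)
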
Nov 25 09:52:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:57 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400c9c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:52:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:57 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8003fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:57.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:52:57 compute-0 podman[259836]: 2025-11-25 09:52:57.976340472 +0000 UTC m=+0.044458172 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:52:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:52:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:58.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:52:58 compute-0 ceph-mon[74207]: pgmap v652: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:52:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:58 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400cb60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:58 compute-0 nova_compute[253512]: 2025-11-25 09:52:58.994 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:59 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x5632768251c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 106 KiB/s wr, 24 op/s
Nov 25 09:52:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:52:59 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8003fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:52:59 compute-0 nova_compute[253512]: 2025-11-25 09:52:59.625 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:52:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:52:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:52:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:59.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:52:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:52:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:00.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 25 09:53:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 25 09:53:00 compute-0 ceph-mon[74207]: pgmap v653: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 106 KiB/s wr, 24 op/s
Nov 25 09:53:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:00 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8003fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:01 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400cb80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 111 KiB/s wr, 24 op/s
Nov 25 09:53:01 compute-0 sudo[259857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:53:01 compute-0 sudo[259857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:01 compute-0 sudo[259857]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:01 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x5632768251c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:01.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:02.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:02 compute-0 ceph-mon[74207]: pgmap v654: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 111 KiB/s wr, 24 op/s
Nov 25 09:53:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:02 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8004c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:03 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8004c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:53:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:03 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8004c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:03.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:03 compute-0 nova_compute[253512]: 2025-11-25 09:53:03.996 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:04.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:04 compute-0 ceph-mon[74207]: pgmap v655: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:53:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/756321123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:04 compute-0 nova_compute[253512]: 2025-11-25 09:53:04.627 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:04 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400cba0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:05 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8004c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:05.381 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:05.382 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:05.382 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:53:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:05 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40080039c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:05.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:06.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:06 compute-0 ceph-mon[74207]: pgmap v656: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:53:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:06 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8004c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:07.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:07.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:07.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:07.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:07 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400cba0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 09:53:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:07 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8004c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:07.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:08.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:08 compute-0 ceph-mon[74207]: pgmap v657: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 09:53:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:08 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8004c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:08 compute-0 nova_compute[253512]: 2025-11-25 09:53:08.998 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:09 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8004c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 09:53:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2638503104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:53:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2800833503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:53:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:09 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8004c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:09 compute-0 nova_compute[253512]: 2025-11-25 09:53:09.628 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:09.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:10.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 25 09:53:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 25 09:53:10 compute-0 ceph-mon[74207]: pgmap v658: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 09:53:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:10 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40080044e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80062d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v659: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:53:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80062d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:11.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:53:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:12.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:53:12 compute-0 ceph-mon[74207]: pgmap v659: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:53:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:12 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400cba0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:13 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4008004e00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v660: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:53:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:13 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80062d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:14 compute-0 nova_compute[253512]: 2025-11-25 09:53:14.000 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:14.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:14 compute-0 ceph-mon[74207]: pgmap v660: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:53:14 compute-0 nova_compute[253512]: 2025-11-25 09:53:14.630 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:14 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80062d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:53:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:53:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:53:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:53:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:53:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:53:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:53:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:15 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400cba0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:53:15 compute-0 nova_compute[253512]: 2025-11-25 09:53:15.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:15 compute-0 nova_compute[253512]: 2025-11-25 09:53:15.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:53:15 compute-0 nova_compute[253512]: 2025-11-25 09:53:15.482 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:53:15 compute-0 nova_compute[253512]: 2025-11-25 09:53:15.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:15 compute-0 nova_compute[253512]: 2025-11-25 09:53:15.482 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:53:15 compute-0 nova_compute[253512]: 2025-11-25 09:53:15.488 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:15 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4008004e00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:15.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:15 compute-0 podman[259899]: 2025-11-25 09:53:15.983089523 +0000 UTC m=+0.045048495 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:53:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:16.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:16 compute-0 ceph-mon[74207]: pgmap v661: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 25 09:53:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:16 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80062d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:16 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:16.965 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:53:16 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:16.965 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:53:16 compute-0 nova_compute[253512]: 2025-11-25 09:53:16.966 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:17.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:17.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:17.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:17.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80062d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 25 09:53:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400cbc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:53:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1230150893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:17.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:18.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:18 compute-0 nova_compute[253512]: 2025-11-25 09:53:18.478 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:18 compute-0 ceph-mon[74207]: pgmap v662: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 25 09:53:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1230150893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/850203280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:18 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4008004e00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:19 compute-0 nova_compute[253512]: 2025-11-25 09:53:19.001 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:19 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:53:19 compute-0 nova_compute[253512]: 2025-11-25 09:53:19.487 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:19 compute-0 nova_compute[253512]: 2025-11-25 09:53:19.487 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:19 compute-0 nova_compute[253512]: 2025-11-25 09:53:19.487 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/259020438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3047445895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:19 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:19 compute-0 nova_compute[253512]: 2025-11-25 09:53:19.632 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:19.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:20.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 25 09:53:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.473 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.473 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.486 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.487 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.487 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.487 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.488 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:20 compute-0 ceph-mon[74207]: pgmap v663: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:53:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:20 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c002600 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:53:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336917698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.836 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.880 253516 DEBUG nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:53:20 compute-0 nova_compute[253512]: 2025-11-25 09:53:20.880 253516 DEBUG nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:53:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:21 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140bf8d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.097 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.098 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4471MB free_disk=59.92179870605469GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.099 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.099 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.184 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Instance 20ce4aa3-c077-4515-86c2-9c414a3cdd3e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.184 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.184 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.229 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Refreshing inventories for resource provider d9873737-caae-40cc-9346-77a33537057c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.269 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updating ProviderTree inventory for provider d9873737-caae-40cc-9346-77a33537057c from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.269 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updating inventory in ProviderTree for provider d9873737-caae-40cc-9346-77a33537057c with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.282 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Refreshing aggregate associations for resource provider d9873737-caae-40cc-9346-77a33537057c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.310 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Refreshing trait associations for resource provider d9873737-caae-40cc-9346-77a33537057c, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX512VPCLMULQDQ,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX512VAES,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.338 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v664: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 25 09:53:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:21 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1336917698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:21 compute-0 sudo[259964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:53:21 compute-0 sudo[259964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:21 compute-0 sudo[259964]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:53:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424670342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.713 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
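
The ceph df round trip above (dispatched at 09:53:21.338, back with exit 0 in 0.375s) is how the RBD-backed compute node sizes its DISK_GB inventory. A hedged sketch of the same call, reading cluster totals from the documented `ceph df --format=json` schema:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    total_gib = stats['total_bytes'] / (1 << 30)
    avail_gib = stats['total_avail_bytes'] / (1 << 30)
    print(f'{avail_gib:.0f} GiB free of {total_gib:.0f} GiB')  # cf. "60 GiB / 60 GiB avail"
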
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.717 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updating inventory in ProviderTree for provider d9873737-caae-40cc-9346-77a33537057c with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.754 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updated inventory for provider d9873737-caae-40cc-9346-77a33537057c with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.754 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updating resource provider d9873737-caae-40cc-9346-77a33537057c generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.755 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updating inventory in ProviderTree for provider d9873737-caae-40cc-9346-77a33537057c with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
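
The generation 3 -> 4 bump above is Placement's optimistic concurrency control: an inventory PUT carries the provider generation it was computed against, and a stale writer gets HTTP 409 and must refresh before retrying. A hedged sketch against the Placement REST API (endpoint and token are placeholders):

    import requests

    rp = 'd9873737-caae-40cc-9346-77a33537057c'
    resp = requests.put(
        f'http://placement.example.com/resource_providers/{rp}/inventories',
        headers={'X-Auth-Token': 'TOKEN',
                 'OpenStack-API-Version': 'placement 1.26'},
        json={'resource_provider_generation': 3,
              'inventories': {'DISK_GB': {'total': 59, 'reserved': 1,
                                          'allocation_ratio': 0.9}}})
    if resp.status_code == 409:
        pass  # generation conflict: re-fetch the provider and retry
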
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.770 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:53:21 compute-0 nova_compute[253512]: 2025-11-25 09:53:21.770 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
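
The "compute_resources" acquire/release pair (held 0.671s) is oslo.concurrency's named-lock idiom: the resource tracker serializes its whole update path under one semaphore. Sketch of the pattern (function name hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # runs with the named lock held; acquisition, hold time and
        # release are logged exactly as in the lockutils lines above
        ...
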
Nov 25 09:53:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:53:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:21.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:53:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:22 compute-0 ceph-mon[74207]: pgmap v664: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 25 09:53:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1424670342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:22 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:22 compute-0 nova_compute[253512]: 2025-11-25 09:53:22.769 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:22 compute-0 nova_compute[253512]: 2025-11-25 09:53:22.769 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:53:22 compute-0 nova_compute[253512]: 2025-11-25 09:53:22.769 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:53:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:23 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c003140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:23 compute-0 nova_compute[253512]: 2025-11-25 09:53:23.119 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:53:23 compute-0 nova_compute[253512]: 2025-11-25 09:53:23.120 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquired lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:53:23 compute-0 nova_compute[253512]: 2025-11-25 09:53:23.120 253516 DEBUG nova.network.neutron [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:53:23 compute-0 nova_compute[253512]: 2025-11-25 09:53:23.120 253516 DEBUG nova.objects.instance [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 20ce4aa3-c077-4515-86c2-9c414a3cdd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:53:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Nov 25 09:53:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:23 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c03d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:23 compute-0 podman[259995]: 2025-11-25 09:53:23.996408857 +0000 UTC m=+0.055932121 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:53:24 compute-0 nova_compute[253512]: 2025-11-25 09:53:24.003 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:24 compute-0 nova_compute[253512]: 2025-11-25 09:53:24.136 253516 DEBUG nova.network.neutron [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updating instance_info_cache with network_info: [{"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
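
The network_info blob above is the cache entry itself: a list of VIFs, each with nested subnets, fixed IPs and floating IPs. A sketch (not nova code; cache_blob stands in for the JSON logged above) of walking that structure:

    import json

    vifs = json.loads(cache_blob)  # cache_blob: the JSON list logged above
    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                fips = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], 'floating:', fips or '-')
    # -> 28155732-... 10.100.0.4 floating: ['192.168.122.249']
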
Nov 25 09:53:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:24 compute-0 nova_compute[253512]: 2025-11-25 09:53:24.149 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Releasing lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:53:24 compute-0 nova_compute[253512]: 2025-11-25 09:53:24.149 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:53:24 compute-0 nova_compute[253512]: 2025-11-25 09:53:24.149 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:53:24 compute-0 nova_compute[253512]: 2025-11-25 09:53:24.635 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:24 compute-0 ceph-mon[74207]: pgmap v665: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Nov 25 09:53:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:24 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:25 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Nov 25 09:53:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:25 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c003140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:25.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:26.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:26 compute-0 ceph-mon[74207]: pgmap v666: 337 pgs: 337 active+clean; 167 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Nov 25 09:53:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:26 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c03d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:26 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:26.967 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:53:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:27.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:27.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:27.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:27.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
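
The alertmanager failures above are plain DNS: the *.shiftstack receivers do not resolve against 192.168.122.80. A quick reproduction sketch (this uses the system resolver, which may differ from the one alertmanager queried):

    import socket

    for host in ('np0005534694.shiftstack', 'np0005534695.shiftstack',
                 'np0005534696.shiftstack'):
        try:
            print(host, socket.getaddrinfo(host, 8443)[0][4])
        except socket.gaierror as exc:
            print(host, 'lookup failed:', exc)  # matches "no such host"
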
Nov 25 09:53:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:27 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 25 09:53:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:27 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:27.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:28.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:28 compute-0 ceph-mon[74207]: pgmap v667: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 25 09:53:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:28 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:28 compute-0 podman[260022]: 2025-11-25 09:53:28.974463379 +0000 UTC m=+0.041984308 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 09:53:29 compute-0 nova_compute[253512]: 2025-11-25 09:53:29.004 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:29 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c10e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:53:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:29 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:29 compute-0 nova_compute[253512]: 2025-11-25 09:53:29.637 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:29.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:53:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:30.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:30] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 25 09:53:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:30] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 25 09:53:30 compute-0 ceph-mon[74207]: pgmap v668: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:53:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:30 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c0041c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:31 compute-0 rsyslogd[961]: imjournal: 1431 messages lost due to rate-limiting (20000 allowed within 600 seconds)
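
The rsyslogd drop above hit imjournal's default budget of 20000 messages per 600 s. If this journal volume is expected, the window and burst are tunable in /etc/rsyslog.conf; the values below are illustrative, not a recommendation:

    # /etc/rsyslog.conf -- imjournal rate limiting (defaults: 600 s / 20000 msgs)
    module(load="imjournal"
           StateFile="imjournal.state"
           Ratelimit.Interval="600"
           Ratelimit.Burst="50000")
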
Nov 25 09:53:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:53:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:31 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c10e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:31.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:32.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:32 compute-0 ceph-mon[74207]: pgmap v669: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:53:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:32 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:33 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c0041c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:53:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:33 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:33.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:34 compute-0 nova_compute[253512]: 2025-11-25 09:53:34.006 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:34.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:34 compute-0 nova_compute[253512]: 2025-11-25 09:53:34.638 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:34 compute-0 ceph-mon[74207]: pgmap v670: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:53:34 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3778458874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:34 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c1df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:35 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:53:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:35 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c0041c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:35.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:36.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:36 compute-0 ceph-mon[74207]: pgmap v671: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:53:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:36 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:37.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:37.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:37.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:37.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:37 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c1df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:37 compute-0 ovn_controller[155020]: 2025-11-25T09:53:37Z|00034|binding|INFO|Releasing lport e2aede20-7d90-49ee-89b5-efa891e733aa from this chassis (sb_readonly=0)
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.108 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 302 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Nov 25 09:53:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:37 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:37.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.823 253516 DEBUG nova.compute.manager [req-cbe65252-8608-40c7-9e00-821a6b3aa35f req-1c1f7204-cf3f-4c5f-b281-3175c05b95ff c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received event network-changed-28155732-58f2-49db-83c4-a44433c25b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.824 253516 DEBUG nova.compute.manager [req-cbe65252-8608-40c7-9e00-821a6b3aa35f req-1c1f7204-cf3f-4c5f-b281-3175c05b95ff c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Refreshing instance network info cache due to event network-changed-28155732-58f2-49db-83c4-a44433c25b29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.824 253516 DEBUG oslo_concurrency.lockutils [req-cbe65252-8608-40c7-9e00-821a6b3aa35f req-1c1f7204-cf3f-4c5f-b281-3175c05b95ff c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.824 253516 DEBUG oslo_concurrency.lockutils [req-cbe65252-8608-40c7-9e00-821a6b3aa35f req-1c1f7204-cf3f-4c5f-b281-3175c05b95ff c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.824 253516 DEBUG nova.network.neutron [req-cbe65252-8608-40c7-9e00-821a6b3aa35f req-1c1f7204-cf3f-4c5f-b281-3175c05b95ff c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Refreshing network info cache for port 28155732-58f2-49db-83c4-a44433c25b29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.897 253516 DEBUG oslo_concurrency.lockutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.897 253516 DEBUG oslo_concurrency.lockutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.897 253516 DEBUG oslo_concurrency.lockutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.897 253516 DEBUG oslo_concurrency.lockutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.897 253516 DEBUG oslo_concurrency.lockutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.898 253516 INFO nova.compute.manager [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Terminating instance
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.899 253516 DEBUG nova.compute.manager [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:53:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:37 compute-0 kernel: tap28155732-58 (unregistering): left promiscuous mode
Nov 25 09:53:37 compute-0 NetworkManager[48903]: <info>  [1764064417.9353] device (tap28155732-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:53:37 compute-0 ovn_controller[155020]: 2025-11-25T09:53:37Z|00035|binding|INFO|Releasing lport 28155732-58f2-49db-83c4-a44433c25b29 from this chassis (sb_readonly=0)
Nov 25 09:53:37 compute-0 ovn_controller[155020]: 2025-11-25T09:53:37Z|00036|binding|INFO|Setting lport 28155732-58f2-49db-83c4-a44433c25b29 down in Southbound
Nov 25 09:53:37 compute-0 ovn_controller[155020]: 2025-11-25T09:53:37Z|00037|binding|INFO|Removing iface tap28155732-58 ovn-installed in OVS
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.940 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.943 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:37.949 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:fd:4d 10.100.0.4'], port_security=['fa:16:3e:99:fd:4d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20ce4aa3-c077-4515-86c2-9c414a3cdd3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '900241cb-44e2-4a1c-b163-ffb52d77319a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fa0b426-4521-4667-8d25-bb8f0339de8c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=28155732-58f2-49db-83c4-a44433c25b29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:53:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:37.950 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 28155732-58f2-49db-83c4-a44433c25b29 in datapath c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0 unbound from our chassis
Nov 25 09:53:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:37.951 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:53:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:37.952 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[efef6a00-1524-4c4a-9fcc-faa0f0121bc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:37 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:37.952 164791 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0 namespace which is not needed anymore
Nov 25 09:53:37 compute-0 nova_compute[253512]: 2025-11-25 09:53:37.965 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:37 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 25 09:53:37 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 12.586s CPU time.
Nov 25 09:53:37 compute-0 systemd-machined[216497]: Machine qemu-1-instance-00000001 terminated.
Nov 25 09:53:38 compute-0 neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0[259035]: [NOTICE]   (259039) : haproxy version is 2.8.14-c23fe91
Nov 25 09:53:38 compute-0 neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0[259035]: [NOTICE]   (259039) : path to executable is /usr/sbin/haproxy
Nov 25 09:53:38 compute-0 neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0[259035]: [ALERT]    (259039) : Current worker (259041) exited with code 143 (Terminated)
Nov 25 09:53:38 compute-0 neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0[259035]: [WARNING]  (259039) : All workers exited. Exiting... (0)
Nov 25 09:53:38 compute-0 systemd[1]: libpod-a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956.scope: Deactivated successfully.
Nov 25 09:53:38 compute-0 conmon[259035]: conmon a29ba9dfdaa3a44a3ae5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956.scope/container/memory.events
Nov 25 09:53:38 compute-0 podman[260071]: 2025-11-25 09:53:38.057210542 +0000 UTC m=+0.033046539 container died a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 09:53:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956-userdata-shm.mount: Deactivated successfully.
Nov 25 09:53:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d5ff7440379f9e6476d5041e482c62dbb06d7f034046dcd2104d71d1713c120-merged.mount: Deactivated successfully.
Nov 25 09:53:38 compute-0 podman[260071]: 2025-11-25 09:53:38.080419865 +0000 UTC m=+0.056255863 container cleanup a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:53:38 compute-0 systemd[1]: libpod-conmon-a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956.scope: Deactivated successfully.
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.111 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.115 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.122 253516 INFO nova.virt.libvirt.driver [-] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Instance destroyed successfully.
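
"Instance destroyed successfully" is the tail of the libvirt teardown that the systemd-machined lines above also record. At the libvirt level the hard stop reduces to roughly the following (hedged sketch; nova's driver does much more around it):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('20ce4aa3-c077-4515-86c2-9c414a3cdd3e')
    dom.destroy()  # hard-stops the qemu process; machined then logs
                   # "Machine qemu-1-instance-00000001 terminated"
    conn.close()
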
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.122 253516 DEBUG nova.objects.instance [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid 20ce4aa3-c077-4515-86c2-9c414a3cdd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:53:38 compute-0 podman[260095]: 2025-11-25 09:53:38.133922185 +0000 UTC m=+0.035688750 container remove a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 09:53:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:38.138 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8ec628-9428-4fda-9b12-a4fa5b70453b]: (4, ('Tue Nov 25 09:53:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0 (a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956)\na29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956\nTue Nov 25 09:53:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0 (a29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956)\na29ba9dfdaa3a44a3ae5489cb25dab65fe0db3efbced559b9ce07643e129d956\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:38.139 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4a43a0-111b-44ad-ae5a-708b0a41bf7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:38.140 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc625dec4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.141 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:38 compute-0 kernel: tapc625dec4-a0: left promiscuous mode
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.146 253516 DEBUG nova.virt.libvirt.vif [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-453829189',display_name='tempest-TestNetworkBasicOps-server-453829189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-453829189',id=1,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEfsH4ZhDHRBQ0SXsq2ksZ7miMJturUxNye3bMuHr58eUD8ojFQTCwUl3zvYWeVggLe5N5I44aJcrzxLhc25YSfwBBYUQcIRfap8L0aaDZjPbskXYYx1DYkjthNp2iz2sw==',key_name='tempest-TestNetworkBasicOps-1065734731',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:52:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-v94dznfx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:52:35Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=20ce4aa3-c077-4515-86c2-9c414a3cdd3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.147 253516 DEBUG nova.network.os_vif_util [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.147 253516 DEBUG nova.network.os_vif_util [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:fd:4d,bridge_name='br-int',has_traffic_filtering=True,id=28155732-58f2-49db-83c4-a44433c25b29,network=Network(c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28155732-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.147 253516 DEBUG os_vif [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:fd:4d,bridge_name='br-int',has_traffic_filtering=True,id=28155732-58f2-49db-83c4-a44433c25b29,network=Network(c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28155732-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.148 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.149 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28155732-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.150 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.152 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:53:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:38.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.158 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.159 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:38.160 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b37e29-1259-4b69-acc2-ca570bfac350]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.161 253516 INFO os_vif [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:fd:4d,bridge_name='br-int',has_traffic_filtering=True,id=28155732-58f2-49db-83c4-a44433c25b29,network=Network(c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28155732-58')
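[editor's note] The DelPortCommand transaction a few lines up is the entire OVS side of this unplug: os-vif asks ovsdbapp to remove tap28155732-58 from br-int, with if_exists=True so a repeated delete is a no-op. A minimal standalone sketch of the same call, assuming a local OVSDB socket path and a 10-second timeout (both assumptions, not values taken from this log):

    import ovs.db.idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local OVSDB endpoint; not recorded in the log above.
    OVSDB = 'unix:/run/openvswitch/db.sock'

    helper = idlutils.get_schema_helper(OVSDB, 'Open_vSwitch')
    helper.register_all()
    idl = ovs.db.idl.Idl(OVSDB, helper)
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same operation the log shows as DelPortCommand(port=tap28155732-58,
    # bridge=br-int, if_exists=True).
    api.del_port('tap28155732-58', bridge='br-int',
                 if_exists=True).execute(check_error=True)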
Nov 25 09:53:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:38.171 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[87de729d-49cf-4e7d-8df7-d45726a26c3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:38.171 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb09b8b-5c86-4464-ac4e-1e93f3da75ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:38.184 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2ab896-1343-428c-a8e9-d3cd82387a15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 313861, 'reachable_time': 30877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260132, 'error': None, 'target': 'ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:53:38 compute-0 systemd[1]: run-netns-ovnmeta\x2dc625dec4\x2da2a0\x2d4fe3\x2d9819\x2d4d44b7a6b4d0.mount: Deactivated successfully.
Nov 25 09:53:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:38.192 164901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:53:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:53:38.193 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[15627b61-eafe-4c4b-8a7c-f2125d118f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
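[editor's note] The two lines above are the OVN metadata agent tearing down the per-network namespace ovnmeta-c625dec4-... once its last port is gone; systemd then reports the matching run-netns mount unit deactivated. Under the privsep daemon, neutron's remove_netns boils down to a pyroute2 call. A rough sketch of the same effect, minus the privilege-separation plumbing (must run as root; this is an approximation, not neutron's exact code path):

    from pyroute2 import netns

    ns = 'ovnmeta-c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0'

    # Roughly what neutron.privileged.agent.linux.ip_lib.remove_netns
    # does once privsep is stripped away; needs CAP_SYS_ADMIN.
    if ns in netns.listnetns():
        netns.remove(ns)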
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.282 253516 DEBUG nova.compute.manager [req-3043146a-3469-4402-975a-1c3151aeb3da req-1b9e297b-4820-4380-b15a-1bfe3eb4c27e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received event network-vif-unplugged-28155732-58f2-49db-83c4-a44433c25b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.282 253516 DEBUG oslo_concurrency.lockutils [req-3043146a-3469-4402-975a-1c3151aeb3da req-1b9e297b-4820-4380-b15a-1bfe3eb4c27e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.282 253516 DEBUG oslo_concurrency.lockutils [req-3043146a-3469-4402-975a-1c3151aeb3da req-1b9e297b-4820-4380-b15a-1bfe3eb4c27e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.282 253516 DEBUG oslo_concurrency.lockutils [req-3043146a-3469-4402-975a-1c3151aeb3da req-1b9e297b-4820-4380-b15a-1bfe3eb4c27e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.283 253516 DEBUG nova.compute.manager [req-3043146a-3469-4402-975a-1c3151aeb3da req-1b9e297b-4820-4380-b15a-1bfe3eb4c27e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] No waiting events found dispatching network-vif-unplugged-28155732-58f2-49db-83c4-a44433c25b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.283 253516 DEBUG nova.compute.manager [req-3043146a-3469-4402-975a-1c3151aeb3da req-1b9e297b-4820-4380-b15a-1bfe3eb4c27e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received event network-vif-unplugged-28155732-58f2-49db-83c4-a44433c25b29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
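[editor's note] The lockutils triplet plus the two manager lines above are one round of nova's external-event handshake: neutron reports network-vif-unplugged, nova takes the per-instance "-events" lock, finds no registered waiter, and just logs the event against the deleting task_state. A toy version of that pop-or-log pattern, with illustrative names only (not nova's actual classes or signatures):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        """Toy pop-or-log waiter table; names are illustrative, not nova's."""

        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = defaultdict(dict)  # uuid -> event name -> Event

        def prepare(self, uuid, name):
            # A caller that expects an event registers a waiter first.
            with self._lock:
                ev = self._waiters[uuid][name] = threading.Event()
            return ev

        def dispatch(self, uuid, name):
            # Mirrors the pop_instance_event flow: take the lock, pop the
            # waiter, and fall back to logging when nobody was waiting.
            with self._lock:
                ev = self._waiters.get(uuid, {}).pop(name, None)
            if ev is None:
                print(f'No waiting events found dispatching {name}')
            else:
                ev.set()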
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.315 253516 INFO nova.virt.libvirt.driver [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Deleting instance files /var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e_del
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.315 253516 INFO nova.virt.libvirt.driver [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Deletion of /var/lib/nova/instances/20ce4aa3-c077-4515-86c2-9c414a3cdd3e_del complete
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.392 253516 DEBUG nova.virt.libvirt.host [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.393 253516 INFO nova.virt.libvirt.host [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] UEFI support detected
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.394 253516 INFO nova.compute.manager [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Took 0.49 seconds to destroy the instance on the hypervisor.
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.394 253516 DEBUG oslo.service.loopingcall [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.394 253516 DEBUG nova.compute.manager [-] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:53:38 compute-0 nova_compute[253512]: 2025-11-25 09:53:38.394 253516 DEBUG nova.network.neutron [-] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:53:38 compute-0 ceph-mon[74207]: pgmap v672: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 302 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Nov 25 09:53:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:38 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c0052c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:39 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.152 253516 DEBUG nova.network.neutron [-] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.163 253516 INFO nova.compute.manager [-] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Took 0.77 seconds to deallocate network for instance.
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.190 253516 DEBUG nova.network.neutron [req-cbe65252-8608-40c7-9e00-821a6b3aa35f req-1c1f7204-cf3f-4c5f-b281-3175c05b95ff c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updated VIF entry in instance network info cache for port 28155732-58f2-49db-83c4-a44433c25b29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.191 253516 DEBUG nova.network.neutron [req-cbe65252-8608-40c7-9e00-821a6b3aa35f req-1c1f7204-cf3f-4c5f-b281-3175c05b95ff c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Updating instance_info_cache with network_info: [{"id": "28155732-58f2-49db-83c4-a44433c25b29", "address": "fa:16:3e:99:fd:4d", "network": {"id": "c625dec4-a2a0-4fe3-9819-4d44b7a6b4d0", "bridge": "br-int", "label": "tempest-network-smoke--1147128549", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28155732-58", "ovs_interfaceid": "28155732-58f2-49db-83c4-a44433c25b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.199 253516 DEBUG oslo_concurrency.lockutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.200 253516 DEBUG oslo_concurrency.lockutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.201 253516 DEBUG oslo_concurrency.lockutils [req-cbe65252-8608-40c7-9e00-821a6b3aa35f req-1c1f7204-cf3f-4c5f-b281-3175c05b95ff c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-20ce4aa3-c077-4515-86c2-9c414a3cdd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.242 253516 DEBUG oslo_concurrency.processutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:53:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 24 KiB/s wr, 30 op/s
Nov 25 09:53:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:53:39 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1805383330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.594 253516 DEBUG oslo_concurrency.processutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
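[editor's note] The 0.351 s "ceph df" run above is how the libvirt driver sizes its RBD-backed disk inventory before reporting to placement; the mon audit lines that bracket it show the same command being dispatched as client.openstack. The probe is easy to reproduce by hand. A sketch using plain subprocess instead of oslo_concurrency.processutils; the JSON key names are how current Ceph releases report cluster totals, which is an assumption to verify on your version:

    import json
    import subprocess

    # Same command the log records, run via subprocess.
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout

    stats = json.loads(out)['stats']
    # 'total_bytes'/'total_avail_bytes' are the usual keys on recent
    # Ceph releases; confirm against your cluster's output.
    print(stats['total_bytes'], stats['total_avail_bytes'])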
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.597 253516 DEBUG nova.compute.provider_tree [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.612 253516 DEBUG nova.scheduler.client.report [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
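[editor's note] The inventory dict above is worth decoding: placement schedules against (total - reserved) * allocation_ratio per resource class, so this host offers four times its physical CPU count but slightly less than its raw disk. Worked out from the exact values in the log line:

    inventory = {
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU': {'total': 4, 'reserved': 0, 'allocation_ratio': 4.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }

    # Placement capacity formula: (total - reserved) * allocation_ratio.
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # MEMORY_MB 7169.0, VCPU 16.0, DISK_GB 52.2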
Nov 25 09:53:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:39 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c1df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.625 253516 DEBUG oslo_concurrency.lockutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.638 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:39 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1805383330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:39.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.859 253516 INFO nova.scheduler.client.report [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance 20ce4aa3-c077-4515-86c2-9c414a3cdd3e
Nov 25 09:53:39 compute-0 nova_compute[253512]: 2025-11-25 09:53:39.923 253516 DEBUG oslo_concurrency.lockutils [None req-2a42be6c-b845-41c1-ad0c-75265b89def8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:40.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:40] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 25 09:53:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:40] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.366 253516 DEBUG nova.compute.manager [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received event network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.366 253516 DEBUG oslo_concurrency.lockutils [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.366 253516 DEBUG oslo_concurrency.lockutils [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.367 253516 DEBUG oslo_concurrency.lockutils [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "20ce4aa3-c077-4515-86c2-9c414a3cdd3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.367 253516 DEBUG nova.compute.manager [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] No waiting events found dispatching network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.367 253516 WARNING nova.compute.manager [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received unexpected event network-vif-plugged-28155732-58f2-49db-83c4-a44433c25b29 for instance with vm_state deleted and task_state None.
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.367 253516 DEBUG nova.compute.manager [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Received event network-vif-deleted-28155732-58f2-49db-83c4-a44433c25b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.367 253516 INFO nova.compute.manager [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Neutron deleted interface 28155732-58f2-49db-83c4-a44433c25b29; detaching it from the instance and deleting it from the info cache
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.367 253516 DEBUG nova.network.neutron [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 25 09:53:40 compute-0 nova_compute[253512]: 2025-11-25 09:53:40.369 253516 DEBUG nova.compute.manager [req-82ee763a-c53c-403c-8b65-7e010376d3ed req-9106cc80-df78-4bf3-88c7-c663592e4c71 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Detach interface failed, port_id=28155732-58f2-49db-83c4-a44433c25b29, reason: Instance 20ce4aa3-c077-4515-86c2-9c414a3cdd3e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 09:53:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:40 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:40 compute-0 ceph-mon[74207]: pgmap v673: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 24 KiB/s wr, 30 op/s
Nov 25 09:53:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:41 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c0052c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 25 KiB/s wr, 236 op/s
Nov 25 09:53:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:41 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:41 compute-0 sudo[260162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:53:41 compute-0 sudo[260162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:41 compute-0 sudo[260162]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:41.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:42.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:42 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:42 compute-0 ceph-mon[74207]: pgmap v674: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 25 KiB/s wr, 236 op/s
Nov 25 09:53:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:43 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:43 compute-0 nova_compute[253512]: 2025-11-25 09:53:43.149 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 11 KiB/s wr, 235 op/s
Nov 25 09:53:43 compute-0 nova_compute[253512]: 2025-11-25 09:53:43.446 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:43 compute-0 nova_compute[253512]: 2025-11-25 09:53:43.545 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:43 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c0052c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:43.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:44.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:44 compute-0 nova_compute[253512]: 2025-11-25 09:53:44.640 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:44 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:44 compute-0 ceph-mon[74207]: pgmap v675: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 11 KiB/s wr, 235 op/s
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:53:44
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.nfs', '.rgw.root', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'volumes']
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:53:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:53:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:53:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:53:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:45 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 11 KiB/s wr, 235 op/s
Nov 25 09:53:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:45 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:45.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:46.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:46 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c0052c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:46 compute-0 ceph-mon[74207]: pgmap v676: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 11 KiB/s wr, 235 op/s
Nov 25 09:53:46 compute-0 podman[260195]: 2025-11-25 09:53:46.96839383 +0000 UTC m=+0.033680214 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:53:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:47.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:47.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:47.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:47.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
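[editor's note] All three dashboard webhook receivers fail the same way: the np0005534694-6.shiftstack names do not resolve via 192.168.122.80:53, so Alertmanager exhausts its retries, drops the notification, and immediately re-queues the next attempt. A quick resolver check from the host reproduces the failure independently of Alertmanager (standard library only):

    import socket

    for host in ('np0005534694.shiftstack',
                 'np0005534695.shiftstack',
                 'np0005534696.shiftstack'):
        try:
            addrs = sorted({ai[4][0]
                            for ai in socket.getaddrinfo(host, 8443)})
            print(host, '->', addrs)
        except socket.gaierror as exc:
            # Matches the 'no such host' errors the dispatcher logs above.
            print(host, '-> unresolved:', exc)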
Nov 25 09:53:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:47 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 11 KiB/s wr, 235 op/s
Nov 25 09:53:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:47 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:53:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:47.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:53:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:48 compute-0 nova_compute[253512]: 2025-11-25 09:53:48.151 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:48.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:48 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:48 compute-0 ceph-mon[74207]: pgmap v677: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 11 KiB/s wr, 235 op/s
Nov 25 09:53:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:49 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f400c0052c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 1.2 KiB/s wr, 206 op/s
Nov 25 09:53:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:49 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:49 compute-0 nova_compute[253512]: 2025-11-25 09:53:49.643 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:49 compute-0 sudo[260214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:53:49 compute-0 sudo[260214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:49 compute-0 sudo[260214]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:49 compute-0 sudo[260239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:53:49 compute-0 sudo[260239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:49.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-0 sudo[260239]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:50.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:50] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Nov 25 09:53:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:53:50] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Nov 25 09:53:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:53:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:53:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:53:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:53:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:53:50 compute-0 sudo[260294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:53:50 compute-0 sudo[260294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:50 compute-0 sudo[260294]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:50 compute-0 sudo[260319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:53:50 compute-0 sudo[260319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:50 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:50 compute-0 ceph-mon[74207]: pgmap v678: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 1.2 KiB/s wr, 206 op/s
Nov 25 09:53:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:53:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:53:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:53:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:53:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:53:50 compute-0 podman[260377]: 2025-11-25 09:53:50.876071788 +0000 UTC m=+0.026665230 container create 3bd30c6804d2fa574cf42d4fac985f9034d7c4a176b38576bac620b3d325650e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 09:53:50 compute-0 systemd[1]: Started libpod-conmon-3bd30c6804d2fa574cf42d4fac985f9034d7c4a176b38576bac620b3d325650e.scope.
Nov 25 09:53:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:53:50 compute-0 podman[260377]: 2025-11-25 09:53:50.93546363 +0000 UTC m=+0.086057072 container init 3bd30c6804d2fa574cf42d4fac985f9034d7c4a176b38576bac620b3d325650e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:53:50 compute-0 podman[260377]: 2025-11-25 09:53:50.940105221 +0000 UTC m=+0.090698663 container start 3bd30c6804d2fa574cf42d4fac985f9034d7c4a176b38576bac620b3d325650e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shtern, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:53:50 compute-0 podman[260377]: 2025-11-25 09:53:50.941466326 +0000 UTC m=+0.092059789 container attach 3bd30c6804d2fa574cf42d4fac985f9034d7c4a176b38576bac620b3d325650e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:53:50 compute-0 xenodochial_shtern[260390]: 167 167
Nov 25 09:53:50 compute-0 systemd[1]: libpod-3bd30c6804d2fa574cf42d4fac985f9034d7c4a176b38576bac620b3d325650e.scope: Deactivated successfully.
Nov 25 09:53:50 compute-0 podman[260377]: 2025-11-25 09:53:50.944513109 +0000 UTC m=+0.095106552 container died 3bd30c6804d2fa574cf42d4fac985f9034d7c4a176b38576bac620b3d325650e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1b36ffc41cc4532a6397fb8da2fe7618b5b1fad6c9adc2bbda8a5788006bf4f-merged.mount: Deactivated successfully.
Nov 25 09:53:50 compute-0 podman[260377]: 2025-11-25 09:53:50.960733468 +0000 UTC m=+0.111326910 container remove 3bd30c6804d2fa574cf42d4fac985f9034d7c4a176b38576bac620b3d325650e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shtern, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:53:50 compute-0 podman[260377]: 2025-11-25 09:53:50.865611638 +0000 UTC m=+0.016205110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:53:50 compute-0 systemd[1]: libpod-conmon-3bd30c6804d2fa574cf42d4fac985f9034d7c4a176b38576bac620b3d325650e.scope: Deactivated successfully.
Nov 25 09:53:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:51 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:51 compute-0 podman[260412]: 2025-11-25 09:53:51.076743646 +0000 UTC m=+0.028023639 container create 9020e452b1deb96bcd1ab6a2aebbdf48ad758bddc8382b0d19e249e1ad986999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:53:51 compute-0 systemd[1]: Started libpod-conmon-9020e452b1deb96bcd1ab6a2aebbdf48ad758bddc8382b0d19e249e1ad986999.scope.
Nov 25 09:53:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5994e8f9fdd3527aef15970e4ceab1c13a524e7efbc6747b915dee29718244f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5994e8f9fdd3527aef15970e4ceab1c13a524e7efbc6747b915dee29718244f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5994e8f9fdd3527aef15970e4ceab1c13a524e7efbc6747b915dee29718244f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5994e8f9fdd3527aef15970e4ceab1c13a524e7efbc6747b915dee29718244f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5994e8f9fdd3527aef15970e4ceab1c13a524e7efbc6747b915dee29718244f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:51 compute-0 podman[260412]: 2025-11-25 09:53:51.131160361 +0000 UTC m=+0.082440354 container init 9020e452b1deb96bcd1ab6a2aebbdf48ad758bddc8382b0d19e249e1ad986999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_burnell, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:53:51 compute-0 podman[260412]: 2025-11-25 09:53:51.136465952 +0000 UTC m=+0.087745935 container start 9020e452b1deb96bcd1ab6a2aebbdf48ad758bddc8382b0d19e249e1ad986999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_burnell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:53:51 compute-0 podman[260412]: 2025-11-25 09:53:51.137637651 +0000 UTC m=+0.088917634 container attach 9020e452b1deb96bcd1ab6a2aebbdf48ad758bddc8382b0d19e249e1ad986999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_burnell, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 09:53:51 compute-0 podman[260412]: 2025-11-25 09:53:51.06509723 +0000 UTC m=+0.016377234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:53:51 compute-0 intelligent_burnell[260426]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:53:51 compute-0 intelligent_burnell[260426]: --> All data devices are unavailable
Nov 25 09:53:51 compute-0 systemd[1]: libpod-9020e452b1deb96bcd1ab6a2aebbdf48ad758bddc8382b0d19e249e1ad986999.scope: Deactivated successfully.
Nov 25 09:53:51 compute-0 podman[260441]: 2025-11-25 09:53:51.419537768 +0000 UTC m=+0.015125644 container died 9020e452b1deb96bcd1ab6a2aebbdf48ad758bddc8382b0d19e249e1ad986999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 09:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5994e8f9fdd3527aef15970e4ceab1c13a524e7efbc6747b915dee29718244f9-merged.mount: Deactivated successfully.
Nov 25 09:53:51 compute-0 podman[260441]: 2025-11-25 09:53:51.440872387 +0000 UTC m=+0.036460263 container remove 9020e452b1deb96bcd1ab6a2aebbdf48ad758bddc8382b0d19e249e1ad986999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:53:51 compute-0 systemd[1]: libpod-conmon-9020e452b1deb96bcd1ab6a2aebbdf48ad758bddc8382b0d19e249e1ad986999.scope: Deactivated successfully.
Nov 25 09:53:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 1.2 KiB/s wr, 206 op/s
Nov 25 09:53:51 compute-0 sudo[260319]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:51 compute-0 sudo[260453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:53:51 compute-0 sudo[260453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:51 compute-0 sudo[260453]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:51 compute-0 sudo[260478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:53:51 compute-0 sudo[260478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:51 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff40021f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:51.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:51 compute-0 podman[260535]: 2025-11-25 09:53:51.838810074 +0000 UTC m=+0.025791762 container create 147ec2ccfc9cf6d9322ec119fa034242cdd18a2010a314b8cbb2cc5e6f1e69da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:53:51 compute-0 systemd[1]: Started libpod-conmon-147ec2ccfc9cf6d9322ec119fa034242cdd18a2010a314b8cbb2cc5e6f1e69da.scope.
Nov 25 09:53:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:53:51 compute-0 podman[260535]: 2025-11-25 09:53:51.88962507 +0000 UTC m=+0.076606777 container init 147ec2ccfc9cf6d9322ec119fa034242cdd18a2010a314b8cbb2cc5e6f1e69da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sinoussi, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:53:51 compute-0 podman[260535]: 2025-11-25 09:53:51.89364086 +0000 UTC m=+0.080622547 container start 147ec2ccfc9cf6d9322ec119fa034242cdd18a2010a314b8cbb2cc5e6f1e69da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sinoussi, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:53:51 compute-0 podman[260535]: 2025-11-25 09:53:51.895008218 +0000 UTC m=+0.081989905 container attach 147ec2ccfc9cf6d9322ec119fa034242cdd18a2010a314b8cbb2cc5e6f1e69da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sinoussi, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 09:53:51 compute-0 eloquent_sinoussi[260548]: 167 167
Nov 25 09:53:51 compute-0 systemd[1]: libpod-147ec2ccfc9cf6d9322ec119fa034242cdd18a2010a314b8cbb2cc5e6f1e69da.scope: Deactivated successfully.
Nov 25 09:53:51 compute-0 podman[260535]: 2025-11-25 09:53:51.897517568 +0000 UTC m=+0.084499265 container died 147ec2ccfc9cf6d9322ec119fa034242cdd18a2010a314b8cbb2cc5e6f1e69da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sinoussi, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d49aa901c6c1bab20346f2ac9168b9c5eda4b9ab471be73c526e09ef79b24d1-merged.mount: Deactivated successfully.
Nov 25 09:53:51 compute-0 podman[260535]: 2025-11-25 09:53:51.917573687 +0000 UTC m=+0.104555374 container remove 147ec2ccfc9cf6d9322ec119fa034242cdd18a2010a314b8cbb2cc5e6f1e69da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sinoussi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:53:51 compute-0 podman[260535]: 2025-11-25 09:53:51.828088042 +0000 UTC m=+0.015069749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:53:51 compute-0 systemd[1]: libpod-conmon-147ec2ccfc9cf6d9322ec119fa034242cdd18a2010a314b8cbb2cc5e6f1e69da.scope: Deactivated successfully.
Nov 25 09:53:52 compute-0 podman[260570]: 2025-11-25 09:53:52.029087789 +0000 UTC m=+0.026635813 container create 7834f61e3c520408da6f36111a60afa3d0962ae663951878c05f479ec1fc8f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_cohen, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:53:52 compute-0 systemd[1]: Started libpod-conmon-7834f61e3c520408da6f36111a60afa3d0962ae663951878c05f479ec1fc8f01.scope.
Nov 25 09:53:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c326b5f8c85b3850fadc3c328b74f893ce993cc574e607179d8a7e43117091e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c326b5f8c85b3850fadc3c328b74f893ce993cc574e607179d8a7e43117091e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c326b5f8c85b3850fadc3c328b74f893ce993cc574e607179d8a7e43117091e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c326b5f8c85b3850fadc3c328b74f893ce993cc574e607179d8a7e43117091e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:52 compute-0 podman[260570]: 2025-11-25 09:53:52.086251963 +0000 UTC m=+0.083800007 container init 7834f61e3c520408da6f36111a60afa3d0962ae663951878c05f479ec1fc8f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 09:53:52 compute-0 podman[260570]: 2025-11-25 09:53:52.091708669 +0000 UTC m=+0.089256694 container start 7834f61e3c520408da6f36111a60afa3d0962ae663951878c05f479ec1fc8f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 25 09:53:52 compute-0 podman[260570]: 2025-11-25 09:53:52.093101896 +0000 UTC m=+0.090649920 container attach 7834f61e3c520408da6f36111a60afa3d0962ae663951878c05f479ec1fc8f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_cohen, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:53:52 compute-0 podman[260570]: 2025-11-25 09:53:52.018327435 +0000 UTC m=+0.015875469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:53:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:52.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:52 compute-0 tender_cohen[260583]: {
Nov 25 09:53:52 compute-0 tender_cohen[260583]:     "1": [
Nov 25 09:53:52 compute-0 tender_cohen[260583]:         {
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "devices": [
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "/dev/loop3"
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             ],
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "lv_name": "ceph_lv0",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "lv_size": "21470642176",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "name": "ceph_lv0",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "tags": {
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.cluster_name": "ceph",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.crush_device_class": "",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.encrypted": "0",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.osd_id": "1",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.type": "block",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.vdo": "0",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:                 "ceph.with_tpm": "0"
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             },
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "type": "block",
Nov 25 09:53:52 compute-0 tender_cohen[260583]:             "vg_name": "ceph_vg0"
Nov 25 09:53:52 compute-0 tender_cohen[260583]:         }
Nov 25 09:53:52 compute-0 tender_cohen[260583]:     ]
Nov 25 09:53:52 compute-0 tender_cohen[260583]: }
Nov 25 09:53:52 compute-0 systemd[1]: libpod-7834f61e3c520408da6f36111a60afa3d0962ae663951878c05f479ec1fc8f01.scope: Deactivated successfully.
Nov 25 09:53:52 compute-0 podman[260592]: 2025-11-25 09:53:52.341705494 +0000 UTC m=+0.015508977 container died 7834f61e3c520408da6f36111a60afa3d0962ae663951878c05f479ec1fc8f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_cohen, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c326b5f8c85b3850fadc3c328b74f893ce993cc574e607179d8a7e43117091e4-merged.mount: Deactivated successfully.
Nov 25 09:53:52 compute-0 podman[260592]: 2025-11-25 09:53:52.364531084 +0000 UTC m=+0.038334557 container remove 7834f61e3c520408da6f36111a60afa3d0962ae663951878c05f479ec1fc8f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:53:52 compute-0 systemd[1]: libpod-conmon-7834f61e3c520408da6f36111a60afa3d0962ae663951878c05f479ec1fc8f01.scope: Deactivated successfully.
Nov 25 09:53:52 compute-0 sudo[260478]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:52 compute-0 sudo[260603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:53:52 compute-0 sudo[260603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:52 compute-0 sudo[260603]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:52 compute-0 sudo[260628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:53:52 compute-0 sudo[260628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:52 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:52 compute-0 ceph-mon[74207]: pgmap v679: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 1.2 KiB/s wr, 206 op/s
Nov 25 09:53:52 compute-0 podman[260685]: 2025-11-25 09:53:52.757224705 +0000 UTC m=+0.028565822 container create dc06e0038fd18b3cd10e9fa6b06991b8e4e80e8fa79008c359838579dd4ccad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_payne, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 09:53:52 compute-0 systemd[1]: Started libpod-conmon-dc06e0038fd18b3cd10e9fa6b06991b8e4e80e8fa79008c359838579dd4ccad6.scope.
Nov 25 09:53:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:53:52 compute-0 podman[260685]: 2025-11-25 09:53:52.822768847 +0000 UTC m=+0.094109964 container init dc06e0038fd18b3cd10e9fa6b06991b8e4e80e8fa79008c359838579dd4ccad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_payne, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:53:52 compute-0 podman[260685]: 2025-11-25 09:53:52.82728421 +0000 UTC m=+0.098625326 container start dc06e0038fd18b3cd10e9fa6b06991b8e4e80e8fa79008c359838579dd4ccad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 09:53:52 compute-0 podman[260685]: 2025-11-25 09:53:52.829352117 +0000 UTC m=+0.100693234 container attach dc06e0038fd18b3cd10e9fa6b06991b8e4e80e8fa79008c359838579dd4ccad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_payne, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:53:52 compute-0 blissful_payne[260698]: 167 167
Nov 25 09:53:52 compute-0 systemd[1]: libpod-dc06e0038fd18b3cd10e9fa6b06991b8e4e80e8fa79008c359838579dd4ccad6.scope: Deactivated successfully.
Nov 25 09:53:52 compute-0 podman[260685]: 2025-11-25 09:53:52.831373338 +0000 UTC m=+0.102714455 container died dc06e0038fd18b3cd10e9fa6b06991b8e4e80e8fa79008c359838579dd4ccad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_payne, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-622f303194d8de85d5d696c452a5eeda4284d628c24e12320733825da8aee091-merged.mount: Deactivated successfully.
Nov 25 09:53:52 compute-0 podman[260685]: 2025-11-25 09:53:52.746024963 +0000 UTC m=+0.017366090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:53:52 compute-0 podman[260685]: 2025-11-25 09:53:52.849323226 +0000 UTC m=+0.120664343 container remove dc06e0038fd18b3cd10e9fa6b06991b8e4e80e8fa79008c359838579dd4ccad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:53:52 compute-0 systemd[1]: libpod-conmon-dc06e0038fd18b3cd10e9fa6b06991b8e4e80e8fa79008c359838579dd4ccad6.scope: Deactivated successfully.
Nov 25 09:53:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:52 compute-0 podman[260719]: 2025-11-25 09:53:52.968029326 +0000 UTC m=+0.028702218 container create 83811648c1ecec2cfaa7e3c23482787812e36223360f93af1caa2537a78ed8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_blackburn, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:53:52 compute-0 systemd[1]: Started libpod-conmon-83811648c1ecec2cfaa7e3c23482787812e36223360f93af1caa2537a78ed8b8.scope.
Nov 25 09:53:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311a21f421460fbcc4150fb66092efcee52fcc52003812afe04f6cb878c247ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311a21f421460fbcc4150fb66092efcee52fcc52003812afe04f6cb878c247ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311a21f421460fbcc4150fb66092efcee52fcc52003812afe04f6cb878c247ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311a21f421460fbcc4150fb66092efcee52fcc52003812afe04f6cb878c247ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:53:53 compute-0 podman[260719]: 2025-11-25 09:53:53.023230601 +0000 UTC m=+0.083903502 container init 83811648c1ecec2cfaa7e3c23482787812e36223360f93af1caa2537a78ed8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_blackburn, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:53:53 compute-0 podman[260719]: 2025-11-25 09:53:53.029810284 +0000 UTC m=+0.090483165 container start 83811648c1ecec2cfaa7e3c23482787812e36223360f93af1caa2537a78ed8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 09:53:53 compute-0 podman[260719]: 2025-11-25 09:53:53.031006919 +0000 UTC m=+0.091679821 container attach 83811648c1ecec2cfaa7e3c23482787812e36223360f93af1caa2537a78ed8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_blackburn, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:53:53 compute-0 podman[260719]: 2025-11-25 09:53:52.956265901 +0000 UTC m=+0.016938803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:53:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:53 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:53 compute-0 nova_compute[253512]: 2025-11-25 09:53:53.118 253516 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064418.117467, 20ce4aa3-c077-4515-86c2-9c414a3cdd3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:53:53 compute-0 nova_compute[253512]: 2025-11-25 09:53:53.118 253516 INFO nova.compute.manager [-] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] VM Stopped (Lifecycle Event)
Nov 25 09:53:53 compute-0 nova_compute[253512]: 2025-11-25 09:53:53.134 253516 DEBUG nova.compute.manager [None req-9e5bc46a-d744-4269-9833-79cbc4f1063c - - - - - -] [instance: 20ce4aa3-c077-4515-86c2-9c414a3cdd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:53:53 compute-0 nova_compute[253512]: 2025-11-25 09:53:53.152 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:53 compute-0 trusting_blackburn[260732]: {}
Nov 25 09:53:53 compute-0 lvm[260808]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:53:53 compute-0 lvm[260808]: VG ceph_vg0 finished
Nov 25 09:53:53 compute-0 systemd[1]: libpod-83811648c1ecec2cfaa7e3c23482787812e36223360f93af1caa2537a78ed8b8.scope: Deactivated successfully.
Nov 25 09:53:53 compute-0 podman[260719]: 2025-11-25 09:53:53.489083067 +0000 UTC m=+0.549755949 container died 83811648c1ecec2cfaa7e3c23482787812e36223360f93af1caa2537a78ed8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:53:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-311a21f421460fbcc4150fb66092efcee52fcc52003812afe04f6cb878c247ba-merged.mount: Deactivated successfully.
Nov 25 09:53:53 compute-0 podman[260719]: 2025-11-25 09:53:53.513938144 +0000 UTC m=+0.574611025 container remove 83811648c1ecec2cfaa7e3c23482787812e36223360f93af1caa2537a78ed8b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 09:53:53 compute-0 systemd[1]: libpod-conmon-83811648c1ecec2cfaa7e3c23482787812e36223360f93af1caa2537a78ed8b8.scope: Deactivated successfully.
Nov 25 09:53:53 compute-0 sudo[260628]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:53:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:53:53 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:53 compute-0 sudo[260820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:53:53 compute-0 sudo[260820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:53:53 compute-0 sudo[260820]: pam_unix(sudo:session): session closed for user root
Nov 25 09:53:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:53 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:53.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:54.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:54 compute-0 ceph-mon[74207]: pgmap v680: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:53:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/593949193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:53:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/593949193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:53:54 compute-0 nova_compute[253512]: 2025-11-25 09:53:54.644 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:54 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff40021f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:55 compute-0 podman[260847]: 2025-11-25 09:53:55.004599632 +0000 UTC m=+0.062102043 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:53:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:55 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:53:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:55 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:55.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:56.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095356 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:53:56 compute-0 ceph-mon[74207]: pgmap v681: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:56 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:57.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:57.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:57.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:53:57.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:53:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:57 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff400c3c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:57 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2293824702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:53:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:57 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:57.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:53:58 compute-0 nova_compute[253512]: 2025-11-25 09:53:58.153 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:53:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:53:58 compute-0 ceph-mon[74207]: pgmap v682: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:58 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:59 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:53:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:53:59 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:53:59 compute-0 nova_compute[253512]: 2025-11-25 09:53:59.645 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:53:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:53:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:53:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:59.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:53:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:53:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:53:59 compute-0 podman[260876]: 2025-11-25 09:53:59.973417409 +0000 UTC m=+0.040751685 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:54:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:00] "GET /metrics HTTP/1.1" 200 48444 "" "Prometheus/2.51.0"
Nov 25 09:54:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:00] "GET /metrics HTTP/1.1" 200 48444 "" "Prometheus/2.51.0"
Nov 25 09:54:00 compute-0 ceph-mon[74207]: pgmap v683: 337 pgs: 337 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:54:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:00 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:01 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 09:54:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:01 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:01 compute-0 sudo[260894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:54:01 compute-0 sudo[260894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:01 compute-0 sudo[260894]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:01.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:02.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:02 compute-0 ceph-mon[74207]: pgmap v684: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 09:54:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3719102998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:54:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/890083397' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:54:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:02 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:03 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:03 compute-0 nova_compute[253512]: 2025-11-25 09:54:03.154 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:54:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:03 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:03.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:04.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:04 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:54:04 compute-0 ceph-mon[74207]: pgmap v685: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:54:04 compute-0 nova_compute[253512]: 2025-11-25 09:54:04.647 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:04 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:05 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:54:05.382 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:54:05.383 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:54:05.383 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:54:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:54:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:05 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:05.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:06.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:06 compute-0 ceph-mon[74207]: pgmap v686: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 09:54:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:06 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:07.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:07 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:07.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:07.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:07.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:07 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:54:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:07 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:54:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 25 09:54:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:07 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:07.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:08 compute-0 nova_compute[253512]: 2025-11-25 09:54:08.154 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:08.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:08 compute-0 ceph-mon[74207]: pgmap v687: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 25 09:54:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:08 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:09 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 25 09:54:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:09 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:09 compute-0 nova_compute[253512]: 2025-11-25 09:54:09.649 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:09.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:10] "GET /metrics HTTP/1.1" 200 48444 "" "Prometheus/2.51.0"
Nov 25 09:54:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:10] "GET /metrics HTTP/1.1" 200 48444 "" "Prometheus/2.51.0"
Nov 25 09:54:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:10 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:54:10 compute-0 ceph-mon[74207]: pgmap v688: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 25 09:54:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:10 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe8006fe0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Nov 25 09:54:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:11 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:11.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:12.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:12 compute-0 ceph-mon[74207]: pgmap v689: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Nov 25 09:54:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:12 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:13 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80075a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:13 compute-0 nova_compute[253512]: 2025-11-25 09:54:13.155 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 25 09:54:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:13 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:13.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:14.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:14 compute-0 ceph-mon[74207]: pgmap v690: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 25 09:54:14 compute-0 nova_compute[253512]: 2025-11-25 09:54:14.651 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:14 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:54:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:54:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:54:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:54:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:54:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:54:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:54:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:15 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 25 09:54:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:15 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:15.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:16.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095416 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:54:16 compute-0 ceph-mon[74207]: pgmap v691: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 25 09:54:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:16 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:17.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:17.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:17.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:17.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4018004fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 25 09:54:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:17 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80075a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:17 compute-0 ovn_controller[155020]: 2025-11-25T09:54:17Z|00038|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 25 09:54:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:17.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:17 compute-0 podman[260937]: 2025-11-25 09:54:17.974789245 +0000 UTC m=+0.037225067 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:54:18 compute-0 nova_compute[253512]: 2025-11-25 09:54:18.156 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:18.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:18 compute-0 ceph-mon[74207]: pgmap v692: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 25 09:54:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/737909963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4246624918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3923914555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:18 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:19 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 09:54:19 compute-0 nova_compute[253512]: 2025-11-25 09:54:19.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:19 compute-0 nova_compute[253512]: 2025-11-25 09:54:19.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:19 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4018004fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:19 compute-0 nova_compute[253512]: 2025-11-25 09:54:19.652 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4109264677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2627551505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:19.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:20.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 25 09:54:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 25 09:54:20 compute-0 nova_compute[253512]: 2025-11-25 09:54:20.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:20 compute-0 nova_compute[253512]: 2025-11-25 09:54:20.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:54:20 compute-0 nova_compute[253512]: 2025-11-25 09:54:20.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:20 compute-0 nova_compute[253512]: 2025-11-25 09:54:20.488 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:20 compute-0 nova_compute[253512]: 2025-11-25 09:54:20.489 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:20 compute-0 nova_compute[253512]: 2025-11-25 09:54:20.489 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:54:20 compute-0 nova_compute[253512]: 2025-11-25 09:54:20.489 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:54:20 compute-0 nova_compute[253512]: 2025-11-25 09:54:20.489 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:54:20 compute-0 ceph-mon[74207]: pgmap v693: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 09:54:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:20 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80075a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:54:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694358255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:20 compute-0 nova_compute[253512]: 2025-11-25 09:54:20.835 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
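The two processutils lines above bracket Nova's storage probe: before auditing resources, the resource tracker shells out to "ceph df --format=json" with the openstack client id. A minimal sketch of the same probe, assuming /etc/ceph/ceph.conf and the openstack keyring are readable from wherever it runs, and using the top-level field names of recent Ceph releases:

    import json
    import subprocess

    # Same command the resource tracker logs above via oslo_concurrency.processutils.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    stats = json.loads(out)["stats"]
    # ceph df reports totals in bytes; convert to GiB to compare with the pgmap lines.
    print("total: %.1f GiB, avail: %.1f GiB"
          % (stats["total_bytes"] / 2**30, stats["total_avail_bytes"] / 2**30))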
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.036 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.037 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4656MB free_disk=59.94289016723633GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.038 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.038 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:54:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:21 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.171 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.172 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.194 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:54:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 09:54:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:54:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/556558394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.539 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.543 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.556 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
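The inventory payload above is what Placement uses to size this node: for each resource class, schedulable capacity is (total - reserved) * allocation_ratio. A worked check with the logged numbers (arithmetic only, not output captured from this system):

    inventory = {
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 7169.0, VCPU 16.0, DISK_GB 52.2

So this otherwise idle node advertises 7169 MB of RAM, 16 oversubscribed vCPUs, and 52.2 GB of disk to the scheduler.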
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.575 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:54:21 compute-0 nova_compute[253512]: 2025-11-25 09:54:21.576 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:54:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:21 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3694358255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/556558394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:21 compute-0 sudo[261002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:54:21 compute-0 sudo[261002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:21 compute-0 sudo[261002]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:21.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:22.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:22 compute-0 nova_compute[253512]: 2025-11-25 09:54:22.571 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:22 compute-0 nova_compute[253512]: 2025-11-25 09:54:22.572 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:22 compute-0 nova_compute[253512]: 2025-11-25 09:54:22.586 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:22 compute-0 nova_compute[253512]: 2025-11-25 09:54:22.586 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:54:22 compute-0 nova_compute[253512]: 2025-11-25 09:54:22.586 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:54:22 compute-0 nova_compute[253512]: 2025-11-25 09:54:22.595 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:54:22 compute-0 nova_compute[253512]: 2025-11-25 09:54:22.595 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:22 compute-0 nova_compute[253512]: 2025-11-25 09:54:22.595 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:22 compute-0 nova_compute[253512]: 2025-11-25 09:54:22.595 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:54:22 compute-0 ceph-mon[74207]: pgmap v694: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 09:54:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:22 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4018006790 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
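This recurring _set_new_cache_sizes line appears to be the monitor's memory autotuner splitting its cache target among the incremental-osdmap, full-osdmap, and key-value caches; read that way, the three allocations should account for roughly all of cache_size. A quick check of the logged values (the field semantics here are an assumption, not confirmed by this excerpt):

    cache_size = 1020054731
    parts = {"inc_alloc": 343932928, "full_alloc": 348127232, "kv_alloc": 318767104}
    total = sum(parts.values())
    print(total, cache_size - total)   # 1010827264, ~8.8 MiB left over to rounding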
Nov 25 09:54:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:23 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fe80075a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:23 compute-0 nova_compute[253512]: 2025-11-25 09:54:23.157 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:54:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:23 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ff4001380 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:23.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:24.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:24 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:54:24.240 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:54:24 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:54:24.241 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:54:24 compute-0 nova_compute[253512]: 2025-11-25 09:54:24.241 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:24 compute-0 nova_compute[253512]: 2025-11-25 09:54:24.654 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:24 compute-0 ceph-mon[74207]: pgmap v695: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:54:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:24 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:25 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40140c2b00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:54:25 compute-0 kernel: ganesha.nfsd[259918]: segfault at 50 ip 00007f40a410332e sp 00007f4071ffa210 error 4 in libntirpc.so.5.8[7f40a40e8000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 25 09:54:25 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:54:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[258321]: 25/11/2025 09:54:25 : epoch 69257c4b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4018006790 fd 42 proxy ignored for local
Nov 25 09:54:25 compute-0 systemd[1]: Started Process Core Dump (PID 261030/UID 0).
Nov 25 09:54:25 compute-0 podman[261031]: 2025-11-25 09:54:25.753458116 +0000 UTC m=+0.073778124 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:54:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:25.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:26.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:26 compute-0 ceph-mon[74207]: pgmap v696: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:54:26 compute-0 systemd-coredump[261032]: Process 258325 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007f40a410332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:54:26 compute-0 systemd[1]: systemd-coredump@10-261030-0.service: Deactivated successfully.
Nov 25 09:54:26 compute-0 systemd[1]: systemd-coredump@10-261030-0.service: Consumed 1.079s CPU time.
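The kernel trap at 09:54:25 and the core dump above are two views of the same crash: fault ip 0x7f40a410332e falls inside libntirpc.so.5.8, whose executable segment the kernel maps at 0x7f40a40e8000, while systemd-coredump resolves the same address to module offset +0x2232e. The arithmetic reconciling the two reports (illustration only; to turn the offset into a symbol, open the stored core with coredumpctl info 258325 or coredumpctl debug once libntirpc debuginfo is installed):

    fault_ip  = 0x7f40a410332e     # from the kernel segfault line
    text_map  = 0x7f40a40e8000     # kernel: libntirpc.so.5.8[7f40a40e8000+2c000]
    frame_off = 0x2232e            # systemd-coredump frame: libntirpc.so.5.8 + 0x2232e

    print(hex(fault_ip - text_map))   # 0x1b32e, offset into the executable segment
    elf_base = fault_ip - frame_off
    print(hex(elf_base))              # 0x7f40a40e1000, coredump's view of the ELF base
    print(hex(text_map - elf_base))   # 0x7000, consistent with the text segment's file offset

The two tools agree once the 0x7000 delta between the ELF base and the executable segment is accounted for.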
Nov 25 09:54:26 compute-0 podman[261061]: 2025-11-25 09:54:26.834619501 +0000 UTC m=+0.019242858 container died 791bbab5845f78c874800a3bbdef657807d13c08bd378e252e2c6bda4d70e108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:54:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9298a2fcc9e2df78612b3903844701153db4ec3da0115abb4947af9b7b847515-merged.mount: Deactivated successfully.
Nov 25 09:54:26 compute-0 podman[261061]: 2025-11-25 09:54:26.852971438 +0000 UTC m=+0.037594774 container remove 791bbab5845f78c874800a3bbdef657807d13c08bd378e252e2c6bda4d70e108 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:54:26 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:54:26 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:54:26 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.113s CPU time.
Nov 25 09:54:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:27.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:27.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:27.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:27.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
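All three webhook targets fail identically: the dashboard receiver hostnames get NXDOMAIN ("no such host") from the resolver at 192.168.122.80:53, so Alertmanager can never deliver and keeps retrying (the same batch repeats at 09:54:37 below). A minimal reproduction through the host's configured resolver; note getaddrinfo consults whatever /etc/resolv.conf points at, which the log suggests is 192.168.122.80:

    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, "resolves")
        except socket.gaierror as exc:
            # Expected on this host: mirrors Alertmanager's "no such host" error.
            print(host, "failed:", exc)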
Nov 25 09:54:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v697: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 09:54:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:27.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:28 compute-0 nova_compute[253512]: 2025-11-25 09:54:28.159 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:28.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:28 compute-0 ceph-mon[74207]: pgmap v697: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 09:54:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 12 KiB/s wr, 0 op/s
Nov 25 09:54:29 compute-0 nova_compute[253512]: 2025-11-25 09:54:29.657 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:29.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:54:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:30.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:30] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Nov 25 09:54:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:30] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Nov 25 09:54:30 compute-0 ceph-mon[74207]: pgmap v698: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 12 KiB/s wr, 0 op/s
Nov 25 09:54:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:30 compute-0 podman[261097]: 2025-11-25 09:54:30.974268129 +0000 UTC m=+0.038810156 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:54:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v699: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 14 KiB/s wr, 1 op/s
Nov 25 09:54:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095431 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
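The backend HAProxy marks DOWN here is the ganesha instance that segfaulted at 09:54:25: a Layer4 check is a bare TCP connect, and it gets "Connection refused" until systemd restarts the service at 09:54:37. A sketch of the equivalent probe; the backend address and port are not shown in this excerpt, so both are assumptions (2049 is the conventional NFS port):

    import socket

    def layer4_check(host: str, port: int, timeout: float = 2.0) -> bool:
        # Equivalent to HAProxy's default tcp-check: connect, then close.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(layer4_check("192.168.122.100", 2049))  # hypothetical backend endpoint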
Nov 25 09:54:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:31.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:32.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:32 compute-0 ceph-mon[74207]: pgmap v699: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 14 KiB/s wr, 1 op/s
Nov 25 09:54:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:33 compute-0 nova_compute[253512]: 2025-11-25 09:54:33.160 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v700: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 14 KiB/s wr, 0 op/s
Nov 25 09:54:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:33.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:34 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:54:34.243 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:54:34 compute-0 nova_compute[253512]: 2025-11-25 09:54:34.659 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:34 compute-0 ceph-mon[74207]: pgmap v700: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 14 KiB/s wr, 0 op/s
Nov 25 09:54:34 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 09:54:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 14 KiB/s wr, 0 op/s
Nov 25 09:54:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:35.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:36.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:36 compute-0 ceph-mon[74207]: pgmap v701: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 14 KiB/s wr, 0 op/s
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:37.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:37.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:37.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:37.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:37 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 11.
Nov 25 09:54:37 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:54:37 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.113s CPU time.
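"Restart counter is at 11" means systemd's Restart= policy has now re-launched this ganesha unit eleven times; each iteration exits status=139 (128 + SIGSEGV, since the main process is the container wrapper) and cycles through the stop/start sequence logged here. One way to watch the loop, a sketch using the unit name from the log (the NRestarts property needs a reasonably recent systemd):

    import subprocess

    unit = "ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service"
    out = subprocess.run(
        ["systemctl", "show", unit, "--property=NRestarts,ExecMainStatus"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # e.g. NRestarts=11 / ExecMainStatus=139 while the crash loop persists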
Nov 25 09:54:37 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:54:37 compute-0 podman[261160]: 2025-11-25 09:54:37.315756415 +0000 UTC m=+0.027833532 container create 92abdd7ddcfd98bfca609264ddff192ba15ebd8ef437a97627439fd7613d197e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cc3388d57a0cc03f92c0aea142f9de623e5873f7847ea5559dd4bdbdb96171/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cc3388d57a0cc03f92c0aea142f9de623e5873f7847ea5559dd4bdbdb96171/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cc3388d57a0cc03f92c0aea142f9de623e5873f7847ea5559dd4bdbdb96171/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cc3388d57a0cc03f92c0aea142f9de623e5873f7847ea5559dd4bdbdb96171/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:37 compute-0 podman[261160]: 2025-11-25 09:54:37.356497328 +0000 UTC m=+0.068574465 container init 92abdd7ddcfd98bfca609264ddff192ba15ebd8ef437a97627439fd7613d197e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:54:37 compute-0 podman[261160]: 2025-11-25 09:54:37.360780714 +0000 UTC m=+0.072857830 container start 92abdd7ddcfd98bfca609264ddff192ba15ebd8ef437a97627439fd7613d197e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:54:37 compute-0 bash[261160]: 92abdd7ddcfd98bfca609264ddff192ba15ebd8ef437a97627439fd7613d197e
Nov 25 09:54:37 compute-0 podman[261160]: 2025-11-25 09:54:37.304016433 +0000 UTC m=+0.016093570 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:54:37 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:37 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:37 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:37 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:37 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:37 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:37 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:37 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:54:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:37 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:54:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v702: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 25 09:54:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:37.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:38 compute-0 nova_compute[253512]: 2025-11-25 09:54:38.162 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:38.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:38 compute-0 ceph-mon[74207]: pgmap v702: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 25 09:54:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Nov 25 09:54:39 compute-0 nova_compute[253512]: 2025-11-25 09:54:39.660 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:39 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4111200644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:39.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:40.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:40] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Nov 25 09:54:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:40] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Nov 25 09:54:40 compute-0 ceph-mon[74207]: pgmap v703: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Nov 25 09:54:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.7 KiB/s wr, 30 op/s
Nov 25 09:54:41 compute-0 sudo[261219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:54:41 compute-0 sudo[261219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:41 compute-0 sudo[261219]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:42.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:42 compute-0 ceph-mon[74207]: pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.7 KiB/s wr, 30 op/s
Nov 25 09:54:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:43 compute-0 nova_compute[253512]: 2025-11-25 09:54:43.164 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:43 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:54:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:43 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:54:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.7 KiB/s wr, 30 op/s
Nov 25 09:54:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:43.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:44.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:44 compute-0 nova_compute[253512]: 2025-11-25 09:54:44.662 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:44 compute-0 ceph-mon[74207]: pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.7 KiB/s wr, 30 op/s
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:54:44
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'backups', '.mgr', 'images', '.rgw.root', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
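Each balancer pass above follows the same script: open an Optimize plan, run do_upmap across the listed pools, and report how many pg-upmap changes were prepared; 0/10 here means placement is already even within the 0.05 max-misplaced budget. The same state can be read back from the CLI; a small wrapper around ceph balancer status (the command exists, but the JSON key names below are assumptions that vary by release):

    import json
    import subprocess

    # Reports balancer mode, whether it is active, and recent results;
    # exact JSON keys differ across Ceph releases.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    status = json.loads(out.stdout)
    print(status.get("mode"), status.get("active"))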
Nov 25 09:54:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:54:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:54:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
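The rbd_support handlers above are reloading their mirror-snapshot and trash-purge schedules pool by pool (vms, volumes, backups, images); an empty start_after= simply means no deferred schedule. The configured schedules are also visible from the rbd CLI; a quick check (both subcommands exist in current rbd, though output shape may differ by release):

    import subprocess

    # List any mirror-snapshot and trash-purge schedules; empty output
    # is consistent with the empty start_after= fields logged above.
    for args in (["rbd", "mirror", "snapshot", "schedule", "ls"],
                 ["rbd", "trash", "purge", "schedule", "ls"]):
        print("$", " ".join(args))
        subprocess.run(args)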
Nov 25 09:54:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v706: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.7 KiB/s wr, 30 op/s
Nov 25 09:54:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:54:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 09:54:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 09:54:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:46.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:46 compute-0 ceph-mon[74207]: pgmap v706: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.7 KiB/s wr, 30 op/s
Nov 25 09:54:46 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Check health
Nov 25 09:54:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:47.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:47.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:47.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:47.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
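All three alertmanager webhook targets fail identically: the names np0005534694/5/6.shiftstack do not resolve against the DNS server at 192.168.122.80, so the ceph-dashboard receiver can never be reached however many retries are attempted; this is a name-resolution problem, not an alertmanager one. The failure reproduces with a plain resolver call; a sketch using a hostname from the log (it queries whatever resolver the host is configured with):

    import socket

    host = "np0005534694.shiftstack"  # hostname taken from the log above
    try:
        infos = socket.getaddrinfo(host, 8443)
        print(host, "resolves to", sorted({i[4][0] for i in infos}))
    except socket.gaierror as err:
        # Mirrors alertmanager's "no such host" failure mode.
        print(host, "does not resolve:", err)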
Nov 25 09:54:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v707: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 32 op/s
Nov 25 09:54:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:48 compute-0 nova_compute[253512]: 2025-11-25 09:54:48.165 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:48.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:48 compute-0 ceph-mon[74207]: pgmap v707: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 32 op/s
Nov 25 09:54:48 compute-0 podman[261251]: 2025-11-25 09:54:48.971400424 +0000 UTC m=+0.038260729 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
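The long podman line above is a periodic health event for ovn_metadata_agent: its config wires a healthcheck that bind-mounts /var/lib/openstack/healthchecks/ovn_metadata_agent into the container and runs /openstack/healthcheck inside it, and a passing run keeps health_status=healthy with a zero failing streak. The same check can be forced by hand via podman healthcheck run (a real podman subcommand; exit code 0 means healthy):

    import subprocess

    # Executes the container's configured healthcheck command once.
    name = "ovn_metadata_agent"
    result = subprocess.run(["podman", "healthcheck", "run", name])
    print(name, "is", "healthy" if result.returncode == 0 else "unhealthy")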
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
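Read as a whole, the ganesha block above is a clean but empty start: grace is lifted and the server banner prints, yet the configuration carries no EXPORT entries, this build's parser does not recognize the RADOS_URLS and RGW blocks, and DBUS and Kerberos are unavailable inside the container, which is unsurprising for a cephadm-managed NFS daemon whose exports are pushed later through the mgr. Whether any exports exist can be checked from Ceph itself; a sketch assuming the NFS cluster id is "cephfs", inferred from the mgr/cephadm/spec.nfs.cephfs key logged below:

    import json
    import subprocess

    cluster = "cephfs"  # assumption: cluster id taken from spec.nfs.cephfs
    out = subprocess.run(
        ["ceph", "nfs", "export", "ls", cluster, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    exports = json.loads(out.stdout)
    print(f"{len(exports)} export(s) defined for cluster {cluster!r}")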
Nov 25 09:54:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v708: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 31 op/s
Nov 25 09:54:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:49 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:49 compute-0 nova_compute[253512]: 2025-11-25 09:54:49.663 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:50.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:50] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Nov 25 09:54:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:54:50] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Nov 25 09:54:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:50 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27dc002ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:50 compute-0 ceph-mon[74207]: pgmap v708: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 31 op/s
Nov 25 09:54:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:51 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v709: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Nov 25 09:54:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095451 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:54:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:51 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e00020a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
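The recurring svc_vc_recv events line up with the haproxy entry above: the ingress haproxy in front of nfs.cephfs runs Layer4 checks, bare TCP connects that send nothing and then close, and ganesha's RPC layer logs each such short-lived socket as dead; the stray '%' is the daemon's own truncated format string, reproduced verbatim. A Layer4 check amounts to no more than this; host and port below are placeholders, not values from this log:

    import socket

    # A Layer4 health check: connect, then close without sending data.
    HOST, PORT = "127.0.0.1", 2049  # placeholders
    with socket.create_connection((HOST, PORT), timeout=2):
        pass  # a successful connect is a passed check; no payload is sent
    print("layer4 check passed")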
Nov 25 09:54:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:52.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:52 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:52 compute-0 ceph-mon[74207]: pgmap v709: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Nov 25 09:54:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:53 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:53 compute-0 nova_compute[253512]: 2025-11-25 09:54:53.166 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v710: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:54:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:53 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:53 compute-0 sudo[261288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:54:53 compute-0 sudo[261288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:53 compute-0 sudo[261288]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:53 compute-0 sudo[261313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:54:53 compute-0 sudo[261313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:53.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:54 compute-0 sudo[261313]: pam_unix(sudo:session): session closed for user root
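The pattern above repeats for every cephadm operation on this host: first which python3 to locate an interpreter, then the staged cephadm binary is run as root under a timeout; gather-facts collects host inventory. The logged command can be replayed directly; a sketch reusing the exact path from the sudo line above, and assuming (unverified here) that gather-facts prints a JSON document whose fields include hostname:

    import json
    import subprocess

    # Path and arguments copied from the sudo line above.
    cephadm = ("/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    out = subprocess.run(
        ["sudo", "/bin/python3", cephadm, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True,
    )
    facts = json.loads(out.stdout)   # assumption: output is JSON
    print(facts.get("hostname"))     # field name is an assumption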
Nov 25 09:54:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:54:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:54.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:54:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:54:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:54:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:54:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:54:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:54:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:54:54 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:54:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:54:54 compute-0 sudo[261368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:54:54 compute-0 sudo[261368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:54 compute-0 sudo[261368]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:54 compute-0 sudo[261393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:54:54 compute-0 sudo[261393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:54 compute-0 nova_compute[253512]: 2025-11-25 09:54:54.663 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:54 compute-0 podman[261449]: 2025-11-25 09:54:54.680336574 +0000 UTC m=+0.027637251 container create e99627a408448425ca9a68482d2703a81768a66edae2b5595afbd7a1730520a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:54:54 compute-0 systemd[1]: Started libpod-conmon-e99627a408448425ca9a68482d2703a81768a66edae2b5595afbd7a1730520a8.scope.
Nov 25 09:54:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:54:54 compute-0 podman[261449]: 2025-11-25 09:54:54.749097379 +0000 UTC m=+0.096398077 container init e99627a408448425ca9a68482d2703a81768a66edae2b5595afbd7a1730520a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:54:54 compute-0 podman[261449]: 2025-11-25 09:54:54.75343163 +0000 UTC m=+0.100732297 container start e99627a408448425ca9a68482d2703a81768a66edae2b5595afbd7a1730520a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_rosalind, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:54:54 compute-0 podman[261449]: 2025-11-25 09:54:54.754496558 +0000 UTC m=+0.101797245 container attach e99627a408448425ca9a68482d2703a81768a66edae2b5595afbd7a1730520a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_rosalind, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:54:54 compute-0 dreamy_rosalind[261462]: 167 167
Nov 25 09:54:54 compute-0 systemd[1]: libpod-e99627a408448425ca9a68482d2703a81768a66edae2b5595afbd7a1730520a8.scope: Deactivated successfully.
Nov 25 09:54:54 compute-0 podman[261449]: 2025-11-25 09:54:54.757443833 +0000 UTC m=+0.104744510 container died e99627a408448425ca9a68482d2703a81768a66edae2b5595afbd7a1730520a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:54:54 compute-0 podman[261449]: 2025-11-25 09:54:54.668986898 +0000 UTC m=+0.016287576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:54:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:54 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-16272d1429a91d8a07b3d50020f726b3e488e49311b56035035cdd7cea91094a-merged.mount: Deactivated successfully.
Nov 25 09:54:54 compute-0 podman[261449]: 2025-11-25 09:54:54.7759816 +0000 UTC m=+0.123282278 container remove e99627a408448425ca9a68482d2703a81768a66edae2b5595afbd7a1730520a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_rosalind, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 09:54:54 compute-0 systemd[1]: libpod-conmon-e99627a408448425ca9a68482d2703a81768a66edae2b5595afbd7a1730520a8.scope: Deactivated successfully.
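The burst of podman events above (create, init, start, attach, died, remove, all within about 100 ms) is how cephadm runs one-shot helpers: each step, here the '167 167' printed by dreamy_rosalind, which reads like a uid/gid probe for the ceph user, executes in a fresh ceph container that is torn down immediately. A rough equivalent with the image digest from the log; the command inside is a guess at what would print '167 167', not the one cephadm actually ran:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # --rm reproduces the create/start/die/remove lifecycle seen in the
    # journal; printing the owner of /var/lib/ceph is illustrative only.
    subprocess.run(["podman", "run", "--rm", "--entrypoint", "stat",
                    IMAGE, "-c", "%u %g", "/var/lib/ceph"])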
Nov 25 09:54:54 compute-0 ceph-mon[74207]: pgmap v710: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:54:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3647834050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3647834050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:54:54 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:54:54 compute-0 podman[261484]: 2025-11-25 09:54:54.900240659 +0000 UTC m=+0.029555148 container create d509f619904214727f873699ca839808a19dfaaaa7d19edd8e0eb4bb59d7490d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:54:54 compute-0 systemd[1]: Started libpod-conmon-d509f619904214727f873699ca839808a19dfaaaa7d19edd8e0eb4bb59d7490d.scope.
Nov 25 09:54:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df6c1e59c80ef937b063d025ba59d34db5dd6cb335184c8564f7fc2e029b6d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df6c1e59c80ef937b063d025ba59d34db5dd6cb335184c8564f7fc2e029b6d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df6c1e59c80ef937b063d025ba59d34db5dd6cb335184c8564f7fc2e029b6d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df6c1e59c80ef937b063d025ba59d34db5dd6cb335184c8564f7fc2e029b6d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df6c1e59c80ef937b063d025ba59d34db5dd6cb335184c8564f7fc2e029b6d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:54 compute-0 podman[261484]: 2025-11-25 09:54:54.953161654 +0000 UTC m=+0.082476152 container init d509f619904214727f873699ca839808a19dfaaaa7d19edd8e0eb4bb59d7490d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:54:54 compute-0 podman[261484]: 2025-11-25 09:54:54.958163373 +0000 UTC m=+0.087477861 container start d509f619904214727f873699ca839808a19dfaaaa7d19edd8e0eb4bb59d7490d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_perlman, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 09:54:54 compute-0 podman[261484]: 2025-11-25 09:54:54.959392931 +0000 UTC m=+0.088707419 container attach d509f619904214727f873699ca839808a19dfaaaa7d19edd8e0eb4bb59d7490d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_perlman, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:54:54 compute-0 podman[261484]: 2025-11-25 09:54:54.888031052 +0000 UTC m=+0.017345550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:54:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:55 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e0002ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:55 compute-0 trusting_perlman[261497]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:54:55 compute-0 trusting_perlman[261497]: --> All data devices are unavailable
Nov 25 09:54:55 compute-0 systemd[1]: libpod-d509f619904214727f873699ca839808a19dfaaaa7d19edd8e0eb4bb59d7490d.scope: Deactivated successfully.
Nov 25 09:54:55 compute-0 podman[261484]: 2025-11-25 09:54:55.218414118 +0000 UTC m=+0.347728616 container died d509f619904214727f873699ca839808a19dfaaaa7d19edd8e0eb4bb59d7490d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_perlman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:54:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9df6c1e59c80ef937b063d025ba59d34db5dd6cb335184c8564f7fc2e029b6d9-merged.mount: Deactivated successfully.
Nov 25 09:54:55 compute-0 podman[261484]: 2025-11-25 09:54:55.239911403 +0000 UTC m=+0.369225891 container remove d509f619904214727f873699ca839808a19dfaaaa7d19edd8e0eb4bb59d7490d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:54:55 compute-0 systemd[1]: libpod-conmon-d509f619904214727f873699ca839808a19dfaaaa7d19edd8e0eb4bb59d7490d.scope: Deactivated successfully.
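trusting_perlman's two lines above decode as follows: the drive group hands ceph-volume a single LVM logical volume ('0 physical, 1 LVM', i.e. /dev/ceph_vg0/ceph_lv0 from the batch command), and batch rejects it as unavailable, the expected outcome when the LV already carries an OSD; the lvm list call dispatched just below is there to confirm exactly that. A sketch of reading the same inventory (it mirrors the logged ceph-volume invocation; the JSON is keyed by OSD id, each entry listing its backing devices):

    import json
    import subprocess

    # Mirrors the logged call: ceph-volume lvm list --format json
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    for osd_id, devices in json.loads(out.stdout).items():
        for dev in devices:
            print(f"osd.{osd_id}", dev.get("lv_path"))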
Nov 25 09:54:55 compute-0 sudo[261393]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:55 compute-0 sudo[261523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:54:55 compute-0 sudo[261523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:55 compute-0 sudo[261523]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:55 compute-0 sudo[261548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:54:55 compute-0 sudo[261548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
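The pg_autoscaler rows above all satisfy one formula: pg target = capacity ratio × bias × N, with N = 300 here, consistent with the default 100 PGs per OSD on a 3-OSD cluster (an inference from the arithmetic, not something the log states); the target is then quantized to a power of two and left at the pool's current pg_num when the change is too small to act on. Reproducing two of the rows:

    # pg target = usage_ratio * bias * N, then quantized to a power of two.
    # N = 300 is inferred from the log's own numbers (e.g. 100 PGs/OSD * 3 OSDs).
    N = 300

    rows = [
        ("images",             0.000665858301588852,  1.0),  # log: 0.19975749...
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),  # log: 0.00061047...
    ]
    for pool, ratio, bias in rows:
        print(f"{pool}: pg target {ratio * bias * N:.10f}")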
Nov 25 09:54:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v711: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:54:55 compute-0 podman[261603]: 2025-11-25 09:54:55.645334602 +0000 UTC m=+0.027590543 container create 786c7a058ebbb00d11cbe0169288f7f1ded1ec8f5ac6f03c5d32e5b5b725079a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_faraday, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:54:55 compute-0 systemd[1]: Started libpod-conmon-786c7a058ebbb00d11cbe0169288f7f1ded1ec8f5ac6f03c5d32e5b5b725079a.scope.
Nov 25 09:54:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:55 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e80025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:54:55 compute-0 podman[261603]: 2025-11-25 09:54:55.682954482 +0000 UTC m=+0.065210432 container init 786c7a058ebbb00d11cbe0169288f7f1ded1ec8f5ac6f03c5d32e5b5b725079a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_faraday, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 09:54:55 compute-0 podman[261603]: 2025-11-25 09:54:55.688037253 +0000 UTC m=+0.070293185 container start 786c7a058ebbb00d11cbe0169288f7f1ded1ec8f5ac6f03c5d32e5b5b725079a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:54:55 compute-0 focused_faraday[261616]: 167 167
Nov 25 09:54:55 compute-0 podman[261603]: 2025-11-25 09:54:55.690267929 +0000 UTC m=+0.072523870 container attach 786c7a058ebbb00d11cbe0169288f7f1ded1ec8f5ac6f03c5d32e5b5b725079a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_faraday, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:54:55 compute-0 systemd[1]: libpod-786c7a058ebbb00d11cbe0169288f7f1ded1ec8f5ac6f03c5d32e5b5b725079a.scope: Deactivated successfully.
Nov 25 09:54:55 compute-0 podman[261603]: 2025-11-25 09:54:55.691575944 +0000 UTC m=+0.073831876 container died 786c7a058ebbb00d11cbe0169288f7f1ded1ec8f5ac6f03c5d32e5b5b725079a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 09:54:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f62d18ff3ac49c94dce22e53a8203f0248329a4a2399f4ea8a03294dc0a869e7-merged.mount: Deactivated successfully.
Nov 25 09:54:55 compute-0 podman[261603]: 2025-11-25 09:54:55.712707792 +0000 UTC m=+0.094963722 container remove 786c7a058ebbb00d11cbe0169288f7f1ded1ec8f5ac6f03c5d32e5b5b725079a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_faraday, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:54:55 compute-0 podman[261603]: 2025-11-25 09:54:55.633062217 +0000 UTC m=+0.015318168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:54:55 compute-0 systemd[1]: libpod-conmon-786c7a058ebbb00d11cbe0169288f7f1ded1ec8f5ac6f03c5d32e5b5b725079a.scope: Deactivated successfully.
Nov 25 09:54:55 compute-0 podman[261639]: 2025-11-25 09:54:55.83610186 +0000 UTC m=+0.027783127 container create 76341ae4e11a13e054a4575682325b511202c6b0a244972d24e4215873a7997b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:54:55 compute-0 systemd[1]: Started libpod-conmon-76341ae4e11a13e054a4575682325b511202c6b0a244972d24e4215873a7997b.scope.
Nov 25 09:54:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5a8ac5601efd71f494db7ee1838802057d04ed354e581ca4c0aaac889851b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5a8ac5601efd71f494db7ee1838802057d04ed354e581ca4c0aaac889851b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5a8ac5601efd71f494db7ee1838802057d04ed354e581ca4c0aaac889851b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5a8ac5601efd71f494db7ee1838802057d04ed354e581ca4c0aaac889851b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:55 compute-0 podman[261639]: 2025-11-25 09:54:55.882292185 +0000 UTC m=+0.073973453 container init 76341ae4e11a13e054a4575682325b511202c6b0a244972d24e4215873a7997b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_goodall, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 09:54:55 compute-0 podman[261639]: 2025-11-25 09:54:55.887334892 +0000 UTC m=+0.079016159 container start 76341ae4e11a13e054a4575682325b511202c6b0a244972d24e4215873a7997b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:54:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:55.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:55 compute-0 podman[261639]: 2025-11-25 09:54:55.891261445 +0000 UTC m=+0.082942701 container attach 76341ae4e11a13e054a4575682325b511202c6b0a244972d24e4215873a7997b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_goodall, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:54:55 compute-0 podman[261639]: 2025-11-25 09:54:55.825963026 +0000 UTC m=+0.017644303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:54:55 compute-0 podman[261650]: 2025-11-25 09:54:55.942490339 +0000 UTC m=+0.087671887 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 09:54:56 compute-0 zealous_goodall[261658]: {
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:     "1": [
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:         {
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "devices": [
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "/dev/loop3"
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             ],
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "lv_name": "ceph_lv0",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "lv_size": "21470642176",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "name": "ceph_lv0",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "tags": {
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.cluster_name": "ceph",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.crush_device_class": "",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.encrypted": "0",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.osd_id": "1",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.type": "block",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.vdo": "0",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:                 "ceph.with_tpm": "0"
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             },
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "type": "block",
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:             "vg_name": "ceph_vg0"
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:         }
Nov 25 09:54:56 compute-0 zealous_goodall[261658]:     ]
Nov 25 09:54:56 compute-0 zealous_goodall[261658]: }
Nov 25 09:54:56 compute-0 systemd[1]: libpod-76341ae4e11a13e054a4575682325b511202c6b0a244972d24e4215873a7997b.scope: Deactivated successfully.
Nov 25 09:54:56 compute-0 podman[261687]: 2025-11-25 09:54:56.147276234 +0000 UTC m=+0.017322448 container died 76341ae4e11a13e054a4575682325b511202c6b0a244972d24e4215873a7997b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_goodall, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-de5a8ac5601efd71f494db7ee1838802057d04ed354e581ca4c0aaac889851b3-merged.mount: Deactivated successfully.
Nov 25 09:54:56 compute-0 podman[261687]: 2025-11-25 09:54:56.167359924 +0000 UTC m=+0.037406128 container remove 76341ae4e11a13e054a4575682325b511202c6b0a244972d24e4215873a7997b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_goodall, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:54:56 compute-0 systemd[1]: libpod-conmon-76341ae4e11a13e054a4575682325b511202c6b0a244972d24e4215873a7997b.scope: Deactivated successfully.
Nov 25 09:54:56 compute-0 sudo[261548]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:56.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:56 compute-0 sudo[261700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:54:56 compute-0 sudo[261700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:56 compute-0 sudo[261700]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:56 compute-0 sudo[261725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:54:56 compute-0 sudo[261725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:56 compute-0 podman[261781]: 2025-11-25 09:54:56.553298842 +0000 UTC m=+0.025096702 container create 1868b796c05bf22c03c79746cdbea811e76feee99daa0dbc60e48cdb10ead3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_faraday, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:54:56 compute-0 systemd[1]: Started libpod-conmon-1868b796c05bf22c03c79746cdbea811e76feee99daa0dbc60e48cdb10ead3fe.scope.
Nov 25 09:54:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:54:56 compute-0 podman[261781]: 2025-11-25 09:54:56.594832048 +0000 UTC m=+0.066629920 container init 1868b796c05bf22c03c79746cdbea811e76feee99daa0dbc60e48cdb10ead3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 09:54:56 compute-0 podman[261781]: 2025-11-25 09:54:56.598916118 +0000 UTC m=+0.070713978 container start 1868b796c05bf22c03c79746cdbea811e76feee99daa0dbc60e48cdb10ead3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_faraday, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Nov 25 09:54:56 compute-0 podman[261781]: 2025-11-25 09:54:56.600506396 +0000 UTC m=+0.072304277 container attach 1868b796c05bf22c03c79746cdbea811e76feee99daa0dbc60e48cdb10ead3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_faraday, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:54:56 compute-0 jolly_faraday[261794]: 167 167
Nov 25 09:54:56 compute-0 systemd[1]: libpod-1868b796c05bf22c03c79746cdbea811e76feee99daa0dbc60e48cdb10ead3fe.scope: Deactivated successfully.
Nov 25 09:54:56 compute-0 podman[261781]: 2025-11-25 09:54:56.601979633 +0000 UTC m=+0.073777494 container died 1868b796c05bf22c03c79746cdbea811e76feee99daa0dbc60e48cdb10ead3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 09:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-709c2cf6fc88dc86448114863e1e2b150f1444e7bfde8da6196bda39cf269821-merged.mount: Deactivated successfully.
Nov 25 09:54:56 compute-0 podman[261781]: 2025-11-25 09:54:56.620191186 +0000 UTC m=+0.091989046 container remove 1868b796c05bf22c03c79746cdbea811e76feee99daa0dbc60e48cdb10ead3fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_faraday, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:54:56 compute-0 podman[261781]: 2025-11-25 09:54:56.543297899 +0000 UTC m=+0.015095770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:54:56 compute-0 systemd[1]: libpod-conmon-1868b796c05bf22c03c79746cdbea811e76feee99daa0dbc60e48cdb10ead3fe.scope: Deactivated successfully.
Nov 25 09:54:56 compute-0 podman[261816]: 2025-11-25 09:54:56.734954802 +0000 UTC m=+0.025608126 container create d8b595941d8f8e83b22b94b59634ce54292af108d0e2aa6dfbdd85b357455dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_feistel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:54:56 compute-0 systemd[1]: Started libpod-conmon-d8b595941d8f8e83b22b94b59634ce54292af108d0e2aa6dfbdd85b357455dfb.scope.
Nov 25 09:54:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:56 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec009900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d90156682d5cba17df48f433b199e3e266fbb08c7df9d11838d4f194778833/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d90156682d5cba17df48f433b199e3e266fbb08c7df9d11838d4f194778833/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d90156682d5cba17df48f433b199e3e266fbb08c7df9d11838d4f194778833/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d90156682d5cba17df48f433b199e3e266fbb08c7df9d11838d4f194778833/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:54:56 compute-0 podman[261816]: 2025-11-25 09:54:56.787477717 +0000 UTC m=+0.078131042 container init d8b595941d8f8e83b22b94b59634ce54292af108d0e2aa6dfbdd85b357455dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_feistel, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:54:56 compute-0 podman[261816]: 2025-11-25 09:54:56.792927542 +0000 UTC m=+0.083580876 container start d8b595941d8f8e83b22b94b59634ce54292af108d0e2aa6dfbdd85b357455dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_feistel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:54:56 compute-0 podman[261816]: 2025-11-25 09:54:56.79422645 +0000 UTC m=+0.084879774 container attach d8b595941d8f8e83b22b94b59634ce54292af108d0e2aa6dfbdd85b357455dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_feistel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:54:56 compute-0 ceph-mon[74207]: pgmap v711: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:54:56 compute-0 podman[261816]: 2025-11-25 09:54:56.724700542 +0000 UTC m=+0.015353886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:54:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:57.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:57.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:57.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:54:57.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:54:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:57 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec009900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:57 compute-0 gracious_feistel[261829]: {}
Nov 25 09:54:57 compute-0 lvm[261905]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:54:57 compute-0 lvm[261905]: VG ceph_vg0 finished
Nov 25 09:54:57 compute-0 systemd[1]: libpod-d8b595941d8f8e83b22b94b59634ce54292af108d0e2aa6dfbdd85b357455dfb.scope: Deactivated successfully.
Nov 25 09:54:57 compute-0 podman[261907]: 2025-11-25 09:54:57.277443512 +0000 UTC m=+0.016212765 container died d8b595941d8f8e83b22b94b59634ce54292af108d0e2aa6dfbdd85b357455dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_feistel, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2d90156682d5cba17df48f433b199e3e266fbb08c7df9d11838d4f194778833-merged.mount: Deactivated successfully.
Nov 25 09:54:57 compute-0 podman[261907]: 2025-11-25 09:54:57.305865262 +0000 UTC m=+0.044634495 container remove d8b595941d8f8e83b22b94b59634ce54292af108d0e2aa6dfbdd85b357455dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_feistel, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 09:54:57 compute-0 systemd[1]: libpod-conmon-d8b595941d8f8e83b22b94b59634ce54292af108d0e2aa6dfbdd85b357455dfb.scope: Deactivated successfully.
Nov 25 09:54:57 compute-0 sudo[261725]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:54:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:54:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:57 compute-0 sudo[261919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:54:57 compute-0 sudo[261919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:54:57 compute-0 sudo[261919]: pam_unix(sudo:session): session closed for user root
Nov 25 09:54:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:54:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:57 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e00034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:57 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1074496672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:54:57 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:57 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:54:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:57.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:54:58 compute-0 nova_compute[253512]: 2025-11-25 09:54:58.167 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:58.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:58 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e00034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:58 compute-0 ceph-mon[74207]: pgmap v712: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 25 09:54:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:59 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00a610 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v713: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:54:59 compute-0 nova_compute[253512]: 2025-11-25 09:54:59.665 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:54:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:54:59 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:54:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:54:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:54:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:59.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:54:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:54:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:00.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:00] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Nov 25 09:55:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:00] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Nov 25 09:55:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:00 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00a610 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:00 compute-0 ceph-mon[74207]: pgmap v713: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:55:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:01 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e00034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:01 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e00034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1480591424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:01.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:01 compute-0 sudo[261949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:55:01 compute-0 sudo[261949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:01 compute-0 sudo[261949]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:01 compute-0 podman[261974]: 2025-11-25 09:55:01.958951815 +0000 UTC m=+0.036498017 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:55:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:02 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:02 compute-0 ceph-mon[74207]: pgmap v714: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3723789558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:03 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00b320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:03 compute-0 nova_compute[253512]: 2025-11-25 09:55:03.169 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v715: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:03 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e00034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:03.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:04.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:04 compute-0 nova_compute[253512]: 2025-11-25 09:55:04.667 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:04 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e00034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:04 compute-0 ceph-mon[74207]: pgmap v715: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:05 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:05.383 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:05.384 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:05.384 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v716: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:05 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:05.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:06.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:06 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e00034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:06 compute-0 ceph-mon[74207]: pgmap v716: 337 pgs: 337 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:07.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:07.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:07.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:07.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:07 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00bf40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:55:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:07 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00bf40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:07.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:08 compute-0 nova_compute[253512]: 2025-11-25 09:55:08.171 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:08.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:08 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00bf40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:08 compute-0 ceph-mon[74207]: pgmap v717: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:55:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:09 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00bf40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:55:09 compute-0 nova_compute[253512]: 2025-11-25 09:55:09.670 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:09 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00bf40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.870025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509870046, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2158, "num_deletes": 251, "total_data_size": 4180669, "memory_usage": 4246816, "flush_reason": "Manual Compaction"}
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509878667, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4077759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19972, "largest_seqno": 22128, "table_properties": {"data_size": 4068072, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20087, "raw_average_key_size": 20, "raw_value_size": 4048608, "raw_average_value_size": 4101, "num_data_blocks": 267, "num_entries": 987, "num_filter_entries": 987, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064306, "oldest_key_time": 1764064306, "file_creation_time": 1764064509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8664 microseconds, and 5517 cpu microseconds.
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.878690) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4077759 bytes OK
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.878701) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.879641) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.879650) EVENT_LOG_v1 {"time_micros": 1764064509879647, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.879660) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4171924, prev total WAL file size 4171924, number of live WAL files 2.
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.880323) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3982KB)], [44(11MB)]
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509880354, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 16563396, "oldest_snapshot_seqno": -1}
Nov 25 09:55:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:09.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5457 keys, 14404122 bytes, temperature: kUnknown
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509910544, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14404122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14365634, "index_size": 23722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137453, "raw_average_key_size": 25, "raw_value_size": 14264928, "raw_average_value_size": 2614, "num_data_blocks": 980, "num_entries": 5457, "num_filter_entries": 5457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764064509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.910686) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14404122 bytes
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.912646) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 547.7 rd, 476.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.9 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 5981, records dropped: 524 output_compression: NoCompression
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.912659) EVENT_LOG_v1 {"time_micros": 1764064509912653, "job": 22, "event": "compaction_finished", "compaction_time_micros": 30243, "compaction_time_cpu_micros": 20647, "output_level": 6, "num_output_files": 1, "total_output_size": 14404122, "num_input_records": 5981, "num_output_records": 5457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509913401, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509915121, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.880270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.915159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.915161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.915162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.915163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:09 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:55:09.915164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:55:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:10] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Nov 25 09:55:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:10] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Nov 25 09:55:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:10.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:10 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:10 compute-0 ceph-mon[74207]: pgmap v718: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:55:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:11 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00bf40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:55:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:11 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00bf40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:11.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:11 compute-0 ceph-mon[74207]: pgmap v719: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:55:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:12 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e0004da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:13 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:13 compute-0 nova_compute[253512]: 2025-11-25 09:55:13.173 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:55:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:13 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:13.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:14.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:14 compute-0 ceph-mon[74207]: pgmap v720: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:55:14 compute-0 nova_compute[253512]: 2025-11-25 09:55:14.671 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:14 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00cdd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:55:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:55:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:55:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:55:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:55:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:55:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:55:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:15 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00cdd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:55:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:15 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00cdd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:15.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:16.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:16 compute-0 ceph-mon[74207]: pgmap v721: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:55:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:16 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:17.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:17.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:17.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:17.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:17 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00cdd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 25 09:55:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:17 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e0004da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:17.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:18 compute-0 nova_compute[253512]: 2025-11-25 09:55:18.175 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:18.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:18 compute-0 ceph-mon[74207]: pgmap v722: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 25 09:55:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4116000474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3968335134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:18 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27ec00cdd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:19 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8004f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:19 compute-0 nova_compute[253512]: 2025-11-25 09:55:19.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v723: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3493657564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:19 compute-0 nova_compute[253512]: 2025-11-25 09:55:19.671 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:19 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e8004f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:19.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:19 compute-0 podman[262011]: 2025-11-25 09:55:19.977583116 +0000 UTC m=+0.037868105 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:55:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 25 09:55:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 25 09:55:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:20.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:20 compute-0 ceph-mon[74207]: pgmap v723: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1215892976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:20 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e0004da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:21 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27e0004da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:55:21 compute-0 nova_compute[253512]: 2025-11-25 09:55:21.468 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:21 compute-0 nova_compute[253512]: 2025-11-25 09:55:21.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:21 compute-0 nova_compute[253512]: 2025-11-25 09:55:21.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v724: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:21 compute-0 nova_compute[253512]: 2025-11-25 09:55:21.488 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:21 compute-0 nova_compute[253512]: 2025-11-25 09:55:21.489 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:21 compute-0 nova_compute[253512]: 2025-11-25 09:55:21.489 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:21 compute-0 nova_compute[253512]: 2025-11-25 09:55:21.489 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:55:21 compute-0 nova_compute[253512]: 2025-11-25 09:55:21.489 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[261172]: 25/11/2025 09:55:21 : epoch 69257cdd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2800003820 fd 39 proxy ignored for local
Nov 25 09:55:21 compute-0 kernel: ganesha.nfsd[262027]: segfault at 50 ip 00007f289b1dd32e sp 00007f285f7fd210 error 4 in libntirpc.so.5.8[7f289b1c2000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 09:55:21 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:55:21 compute-0 systemd[1]: Started Process Core Dump (PID 262048/UID 0).
Nov 25 09:55:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:55:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3590733530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:21 compute-0 nova_compute[253512]: 2025-11-25 09:55:21.829 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:21.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:21 compute-0 sudo[262054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:55:21 compute-0 sudo[262054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:21 compute-0 sudo[262054]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.080 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.081 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4645MB free_disk=59.942752838134766GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.081 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.081 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.133 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.134 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.148 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:22.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:55:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842284857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.498 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.502 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.516 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.517 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:55:22 compute-0 nova_compute[253512]: 2025-11-25 09:55:22.518 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:22 compute-0 ceph-mon[74207]: pgmap v724: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3590733530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1842284857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:22 compute-0 systemd-coredump[262050]: Process 261176 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 56:
                                                    #0  0x00007f289b1dd32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:55:22 compute-0 systemd[1]: systemd-coredump@11-262048-0.service: Deactivated successfully.
Nov 25 09:55:22 compute-0 systemd[1]: systemd-coredump@11-262048-0.service: Consumed 1.009s CPU time.
Nov 25 09:55:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:22 compute-0 podman[262105]: 2025-11-25 09:55:22.962745733 +0000 UTC m=+0.017596641 container died 92abdd7ddcfd98bfca609264ddff192ba15ebd8ef437a97627439fd7613d197e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-49cc3388d57a0cc03f92c0aea142f9de623e5873f7847ea5559dd4bdbdb96171-merged.mount: Deactivated successfully.
Nov 25 09:55:22 compute-0 podman[262105]: 2025-11-25 09:55:22.981462405 +0000 UTC m=+0.036313304 container remove 92abdd7ddcfd98bfca609264ddff192ba15ebd8ef437a97627439fd7613d197e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 09:55:22 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:55:23 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:55:23 compute-0 nova_compute[253512]: 2025-11-25 09:55:23.176 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:23 compute-0 nova_compute[253512]: 2025-11-25 09:55:23.518 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:23 compute-0 nova_compute[253512]: 2025-11-25 09:55:23.518 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:23 compute-0 nova_compute[253512]: 2025-11-25 09:55:23.519 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:23 compute-0 nova_compute[253512]: 2025-11-25 09:55:23.519 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:55:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:23.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:24.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:24 compute-0 nova_compute[253512]: 2025-11-25 09:55:24.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:24 compute-0 nova_compute[253512]: 2025-11-25 09:55:24.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:55:24 compute-0 nova_compute[253512]: 2025-11-25 09:55:24.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:55:24 compute-0 nova_compute[253512]: 2025-11-25 09:55:24.483 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:55:24 compute-0 nova_compute[253512]: 2025-11-25 09:55:24.483 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:55:24 compute-0 nova_compute[253512]: 2025-11-25 09:55:24.673 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:24 compute-0 ceph-mon[74207]: pgmap v725: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=cleanup t=2025-11-25T09:55:25.18147998Z level=info msg="Completed cleanup jobs" duration=2.614188ms
Nov 25 09:55:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=grafana.update.checker t=2025-11-25T09:55:25.271470649Z level=info msg="Update check succeeded" duration=43.799577ms
Nov 25 09:55:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=plugins.update.checker t=2025-11-25T09:55:25.272756231Z level=info msg="Update check succeeded" duration=34.2646ms
Nov 25 09:55:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v726: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:25.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:26.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:26 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:26.689 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:55:26 compute-0 nova_compute[253512]: 2025-11-25 09:55:26.689 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:26 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:26.690 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:55:26 compute-0 ceph-mon[74207]: pgmap v726: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:26 compute-0 podman[262141]: 2025-11-25 09:55:26.994852561 +0000 UTC m=+0.058710384 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:55:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:27.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:27.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:27.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:27.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
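The three webhook failures above share one cause: the resolver at 192.168.122.80 returns no records for the np0005534694-6.shiftstack dashboard hosts, so every notify retry dies at the DNS lookup before any TCP connection is attempted. A minimal reproduction, assuming the same system resolver (hostnames and port taken from the log lines):

    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            # getaddrinfo consults the same resolver alertmanager's
            # webhook integration uses for its Post calls.
            infos = socket.getaddrinfo(host, 8443, type=socket.SOCK_STREAM)
            print(host, "->", sorted({ai[4][0] for ai in infos}))
        except socket.gaierror as exc:
            print(host, "-> lookup failed:", exc)  # the "no such host" case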
Nov 25 09:55:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095527 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
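haproxy's Layer4 check is a bare TCP connect, so "Connection refused" means nothing is listening on the nfs.cephfs.2 backend; that matches the ganesha unit for nfs.cephfs.2.0 sitting in a systemd restart loop further down (restart counter at 12). A rough equivalent of the probe; the host and port below are placeholders for illustration, not values from this log:

    import socket

    def l4_check(host, port, timeout=1.0):
        # haproxy marks the server UP if the TCP handshake completes,
        # DOWN on refusal or timeout; no payload is exchanged.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(l4_check("192.0.2.10", 2049))  # placeholder backend address/port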
Nov 25 09:55:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:27.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:28 compute-0 nova_compute[253512]: 2025-11-25 09:55:28.177 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:28.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:28.691 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:28 compute-0 ceph-mon[74207]: pgmap v727: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 09:55:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v728: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 16 KiB/s wr, 0 op/s
Nov 25 09:55:29 compute-0 nova_compute[253512]: 2025-11-25 09:55:29.675 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:29.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:55:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:30] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 25 09:55:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:30] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 25 09:55:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:30.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.424 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.425 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.435 253516 DEBUG nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.500 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.500 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.505 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.505 253516 INFO nova.compute.claims [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.576 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:30 compute-0 ceph-mon[74207]: pgmap v728: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 16 KiB/s wr, 0 op/s
Nov 25 09:55:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:55:30 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1846692248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.909 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.913 253516 DEBUG nova.compute.provider_tree [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.926 253516 DEBUG nova.scheduler.client.report [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
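Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the unchanged figures above work out to 16 vCPUs, 7169 MB of RAM, and 52.2 GB of disk:

    # usable = (total - reserved) * allocation_ratio, values from the log
    inventory = {
        'VCPU':      (4,    0,   4.0),
        'MEMORY_MB': (7681, 512, 1.0),
        'DISK_GB':   (59,   1,   0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 16.0 / MEMORY_MB 7169.0 / DISK_GB 52.2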
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.940 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
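The Acquiring/acquired/released triple around the claim is the standard oslo.concurrency pattern; the lockutils wrapper logs the wait and hold times (0.000s and 0.440s here) around the decorated body. A sketch of the shape, with an illustrative function signature:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim(context, instance, nodename):
        # Runs with the in-process "compute_resources" lock held; the
        # wrapper emits the acquire/release DEBUG lines seen above.
        pass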
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.941 253516 DEBUG nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.970 253516 DEBUG nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.971 253516 DEBUG nova.network.neutron [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.981 253516 INFO nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:55:30 compute-0 nova_compute[253512]: 2025-11-25 09:55:30.990 253516 DEBUG nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.048 253516 DEBUG nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.049 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.049 253516 INFO nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Creating image(s)
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.065 253516 DEBUG nova.storage.rbd_utils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.082 253516 DEBUG nova.storage.rbd_utils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.099 253516 DEBUG nova.storage.rbd_utils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.101 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.147 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
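The qemu-img probe is wrapped in oslo.concurrency's prlimit helper so a corrupt or hostile image can't make the info call consume unbounded resources: --as=1073741824 caps the address space at 1 GiB and --cpu=30 caps CPU seconds. Roughly the same call through the library API (a sketch, reusing the logged limits and path):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9',
        '--force-share', '--output=json',
        # ProcessLimits is applied via the prlimit wrapper subprocess
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))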
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.148 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.148 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.149 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.166 253516 DEBUG nova.storage.rbd_utils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.168 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.182 253516 DEBUG nova.policy [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.299 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.347 253516 DEBUG nova.storage.rbd_utils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
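The import/resize pair writes the cached base image into the vms pool and then grows it to the flavor's 1 GiB root disk (m1.nano, root_gb=1, per the flavor dump later in this log). The resize step through the Python rbd bindings would look roughly like this (a sketch; pool, image name, and size are the logged values):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk') as image:
                image.resize(1073741824)  # 1 GiB, matching the resize log line
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()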
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.404 253516 DEBUG nova.objects.instance [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid 05a3fbe7-a832-4fb6-ad57-bfdd256afc57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.420 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.420 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Ensure instance console log exists: /var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.420 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.420 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.421 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:55:31 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1846692248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:55:31 compute-0 nova_compute[253512]: 2025-11-25 09:55:31.802 253516 DEBUG nova.network.neutron [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Successfully created port: 60c6f2c0-ef30-4463-9cb7-83925fe7d146 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:55:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:31.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:32.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:32 compute-0 ceph-mon[74207]: pgmap v729: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:55:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:32 compute-0 podman[262358]: 2025-11-25 09:55:32.976519677 +0000 UTC m=+0.041681564 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:55:33 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 12.
Nov 25 09:55:33 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:55:33 compute-0 nova_compute[253512]: 2025-11-25 09:55:33.178 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:33 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:55:33 compute-0 podman[262412]: 2025-11-25 09:55:33.321873293 +0000 UTC m=+0.027541008 container create a984ad0109bdc29fe2f01737d192d6222f6e3da649483f064a38cb6604dc6e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 25 09:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febab89d2b0e738e0582178a91e806792f37089b55c62b3151de4d8b631c5d01/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febab89d2b0e738e0582178a91e806792f37089b55c62b3151de4d8b631c5d01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febab89d2b0e738e0582178a91e806792f37089b55c62b3151de4d8b631c5d01/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febab89d2b0e738e0582178a91e806792f37089b55c62b3151de4d8b631c5d01/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:33 compute-0 podman[262412]: 2025-11-25 09:55:33.364654759 +0000 UTC m=+0.070322474 container init a984ad0109bdc29fe2f01737d192d6222f6e3da649483f064a38cb6604dc6e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:55:33 compute-0 podman[262412]: 2025-11-25 09:55:33.369061726 +0000 UTC m=+0.074729431 container start a984ad0109bdc29fe2f01737d192d6222f6e3da649483f064a38cb6604dc6e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:55:33 compute-0 bash[262412]: a984ad0109bdc29fe2f01737d192d6222f6e3da649483f064a38cb6604dc6e5a
Nov 25 09:55:33 compute-0 podman[262412]: 2025-11-25 09:55:33.310523238 +0000 UTC m=+0.016190963 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:55:33 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:55:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:33 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:55:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:33 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:55:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:33 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:55:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:33 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:55:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:33 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:55:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:33 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:55:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:33 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:55:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:33 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:55:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v730: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.7 KiB/s wr, 0 op/s
Nov 25 09:55:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:33.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:34.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:34 compute-0 nova_compute[253512]: 2025-11-25 09:55:34.677 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:34 compute-0 nova_compute[253512]: 2025-11-25 09:55:34.702 253516 DEBUG nova.network.neutron [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Successfully updated port: 60c6f2c0-ef30-4463-9cb7-83925fe7d146 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:55:34 compute-0 nova_compute[253512]: 2025-11-25 09:55:34.721 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-05a3fbe7-a832-4fb6-ad57-bfdd256afc57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:55:34 compute-0 nova_compute[253512]: 2025-11-25 09:55:34.721 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-05a3fbe7-a832-4fb6-ad57-bfdd256afc57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:55:34 compute-0 nova_compute[253512]: 2025-11-25 09:55:34.721 253516 DEBUG nova.network.neutron [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:55:34 compute-0 nova_compute[253512]: 2025-11-25 09:55:34.785 253516 DEBUG nova.compute.manager [req-e11ac6bf-8d01-4724-8f4f-891bb2b45774 req-3f64dada-bbf4-4253-9f24-114665687837 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received event network-changed-60c6f2c0-ef30-4463-9cb7-83925fe7d146 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:34 compute-0 nova_compute[253512]: 2025-11-25 09:55:34.785 253516 DEBUG nova.compute.manager [req-e11ac6bf-8d01-4724-8f4f-891bb2b45774 req-3f64dada-bbf4-4253-9f24-114665687837 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Refreshing instance network info cache due to event network-changed-60c6f2c0-ef30-4463-9cb7-83925fe7d146. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:55:34 compute-0 nova_compute[253512]: 2025-11-25 09:55:34.786 253516 DEBUG oslo_concurrency.lockutils [req-e11ac6bf-8d01-4724-8f4f-891bb2b45774 req-3f64dada-bbf4-4253-9f24-114665687837 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-05a3fbe7-a832-4fb6-ad57-bfdd256afc57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:55:34 compute-0 ceph-mon[74207]: pgmap v730: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.7 KiB/s wr, 0 op/s
Nov 25 09:55:34 compute-0 nova_compute[253512]: 2025-11-25 09:55:34.883 253516 DEBUG nova.network.neutron [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:55:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v731: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.7 KiB/s wr, 0 op/s
Nov 25 09:55:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:35.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:36.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.724 253516 DEBUG nova.network.neutron [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Updating instance_info_cache with network_info: [{"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.738 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-05a3fbe7-a832-4fb6-ad57-bfdd256afc57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.738 253516 DEBUG nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Instance network_info: |[{"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.739 253516 DEBUG oslo_concurrency.lockutils [req-e11ac6bf-8d01-4724-8f4f-891bb2b45774 req-3f64dada-bbf4-4253-9f24-114665687837 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-05a3fbe7-a832-4fb6-ad57-bfdd256afc57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.739 253516 DEBUG nova.network.neutron [req-e11ac6bf-8d01-4724-8f4f-891bb2b45774 req-3f64dada-bbf4-4253-9f24-114665687837 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Refreshing network info cache for port 60c6f2c0-ef30-4463-9cb7-83925fe7d146 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.741 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Start _get_guest_xml network_info=[{"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.744 253516 WARNING nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.749 253516 DEBUG nova.virt.libvirt.host [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.749 253516 DEBUG nova.virt.libvirt.host [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.753 253516 DEBUG nova.virt.libvirt.host [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.753 253516 DEBUG nova.virt.libvirt.host [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.753 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.753 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.754 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.754 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.754 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.754 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.754 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.755 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.755 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.755 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.755 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.755 253516 DEBUG nova.virt.hardware [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 09:55:36 compute-0 nova_compute[253512]: 2025-11-25 09:55:36.757 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:36 compute-0 ceph-mon[74207]: pgmap v731: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.7 KiB/s wr, 0 op/s
Nov 25 09:55:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:37.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:37.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:37.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:37.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:55:37 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239035204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.096 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.117 253516 DEBUG nova.storage.rbd_utils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.119 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:55:37 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444976167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.453 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.454 253516 DEBUG nova.virt.libvirt.vif [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:55:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1838931960',display_name='tempest-TestNetworkBasicOps-server-1838931960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1838931960',id=5,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5JfKrSzRW3uIA2kb/BdlbYK+cNdQ7voGr9f4HHF406651oaxxcr/stjsonis3f8jxE5v2FRbpiulcVQUAQR/seSqNnq9wzlhs6aIac2P8DyWAsbG/pyb1H9xyF++6iRA==',key_name='tempest-TestNetworkBasicOps-1692779666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-0pvttemy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:55:31Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=05a3fbe7-a832-4fb6-ad57-bfdd256afc57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.455 253516 DEBUG nova.network.os_vif_util [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.455 253516 DEBUG nova.network.os_vif_util [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:53:15,bridge_name='br-int',has_traffic_filtering=True,id=60c6f2c0-ef30-4463-9cb7-83925fe7d146,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c6f2c0-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.456 253516 DEBUG nova.objects.instance [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid 05a3fbe7-a832-4fb6-ad57-bfdd256afc57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.469 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <uuid>05a3fbe7-a832-4fb6-ad57-bfdd256afc57</uuid>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <name>instance-00000005</name>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <memory>131072</memory>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <vcpu>1</vcpu>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <metadata>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <nova:name>tempest-TestNetworkBasicOps-server-1838931960</nova:name>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <nova:creationTime>2025-11-25 09:55:36</nova:creationTime>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <nova:flavor name="m1.nano">
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <nova:memory>128</nova:memory>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <nova:disk>1</nova:disk>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <nova:swap>0</nova:swap>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       </nova:flavor>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <nova:owner>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       </nova:owner>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <nova:ports>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <nova:port uuid="60c6f2c0-ef30-4463-9cb7-83925fe7d146">
Nov 25 09:55:37 compute-0 nova_compute[253512]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         </nova:port>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       </nova:ports>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     </nova:instance>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   </metadata>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <sysinfo type="smbios">
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <system>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <entry name="serial">05a3fbe7-a832-4fb6-ad57-bfdd256afc57</entry>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <entry name="uuid">05a3fbe7-a832-4fb6-ad57-bfdd256afc57</entry>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     </system>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   </sysinfo>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <os>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <boot dev="hd"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <smbios mode="sysinfo"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   </os>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <features>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <acpi/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <apic/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <vmcoreinfo/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   </features>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <clock offset="utc">
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <timer name="hpet" present="no"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   </clock>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <cpu mode="host-model" match="exact">
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <disk type="network" device="disk">
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk">
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       </source>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <target dev="vda" bus="virtio"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <disk type="network" device="cdrom">
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk.config">
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       </source>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:55:37 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <target dev="sda" bus="sata"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <interface type="ethernet">
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <mac address="fa:16:3e:e5:53:15"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <mtu size="1442"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <target dev="tap60c6f2c0-ef"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <serial type="pty">
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <log file="/var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57/console.log" append="off"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     </serial>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <video>
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     </video>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <input type="tablet" bus="usb"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <rng model="virtio">
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <controller type="usb" index="0"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     <memballoon model="virtio">
Nov 25 09:55:37 compute-0 nova_compute[253512]:       <stats period="10"/>
Nov 25 09:55:37 compute-0 nova_compute[253512]:     </memballoon>
Nov 25 09:55:37 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:55:37 compute-0 nova_compute[253512]: </domain>
Nov 25 09:55:37 compute-0 nova_compute[253512]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.470 253516 DEBUG nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Preparing to wait for external event network-vif-plugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.471 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.471 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.471 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.472 253516 DEBUG nova.virt.libvirt.vif [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:55:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1838931960',display_name='tempest-TestNetworkBasicOps-server-1838931960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1838931960',id=5,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5JfKrSzRW3uIA2kb/BdlbYK+cNdQ7voGr9f4HHF406651oaxxcr/stjsonis3f8jxE5v2FRbpiulcVQUAQR/seSqNnq9wzlhs6aIac2P8DyWAsbG/pyb1H9xyF++6iRA==',key_name='tempest-TestNetworkBasicOps-1692779666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-0pvttemy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:55:31Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=05a3fbe7-a832-4fb6-ad57-bfdd256afc57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.472 253516 DEBUG nova.network.os_vif_util [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.473 253516 DEBUG nova.network.os_vif_util [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:53:15,bridge_name='br-int',has_traffic_filtering=True,id=60c6f2c0-ef30-4463-9cb7-83925fe7d146,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c6f2c0-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.473 253516 DEBUG os_vif [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:53:15,bridge_name='br-int',has_traffic_filtering=True,id=60c6f2c0-ef30-4463-9cb7-83925fe7d146,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c6f2c0-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.474 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.474 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.475 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.477 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.477 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60c6f2c0-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.478 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60c6f2c0-ef, col_values=(('external_ids', {'iface-id': '60c6f2c0-ef30-4463-9cb7-83925fe7d146', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:53:15', 'vm-uuid': '05a3fbe7-a832-4fb6-ad57-bfdd256afc57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.479 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:37 compute-0 NetworkManager[48903]: <info>  [1764064537.4796] manager: (tap60c6f2c0-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 25 09:55:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v732: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.482 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.484 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.485 253516 INFO os_vif [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:53:15,bridge_name='br-int',has_traffic_filtering=True,id=60c6f2c0-ef30-4463-9cb7-83925fe7d146,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c6f2c0-ef')
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.519 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.519 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.519 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:e5:53:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.520 253516 INFO nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Using config drive
Nov 25 09:55:37 compute-0 nova_compute[253512]: 2025-11-25 09:55:37.537 253516 DEBUG nova.storage.rbd_utils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:37 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/239035204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:37 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/444976167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:55:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:37.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.228 253516 INFO nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Creating config drive at /var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57/disk.config
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.233 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_lo3my6m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:38.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.350 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_lo3my6m" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.371 253516 DEBUG nova.storage.rbd_utils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.374 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57/disk.config 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.454 253516 DEBUG oslo_concurrency.processutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57/disk.config 05a3fbe7-a832-4fb6-ad57-bfdd256afc57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.454 253516 INFO nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Deleting local config drive /var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57/disk.config because it was imported into RBD.
Nov 25 09:55:38 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 09:55:38 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 09:55:38 compute-0 kernel: tap60c6f2c0-ef: entered promiscuous mode
Nov 25 09:55:38 compute-0 ovn_controller[155020]: 2025-11-25T09:55:38Z|00039|binding|INFO|Claiming lport 60c6f2c0-ef30-4463-9cb7-83925fe7d146 for this chassis.
Nov 25 09:55:38 compute-0 ovn_controller[155020]: 2025-11-25T09:55:38Z|00040|binding|INFO|60c6f2c0-ef30-4463-9cb7-83925fe7d146: Claiming fa:16:3e:e5:53:15 10.100.0.6
Nov 25 09:55:38 compute-0 NetworkManager[48903]: <info>  [1764064538.5185] manager: (tap60c6f2c0-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.518 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:38 compute-0 systemd-udevd[262624]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:55:38 compute-0 NetworkManager[48903]: <info>  [1764064538.5538] device (tap60c6f2c0-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:55:38 compute-0 NetworkManager[48903]: <info>  [1764064538.5546] device (tap60c6f2c0-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.597 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:38 compute-0 ovn_controller[155020]: 2025-11-25T09:55:38Z|00041|binding|INFO|Setting lport 60c6f2c0-ef30-4463-9cb7-83925fe7d146 ovn-installed in OVS
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.604 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:38 compute-0 ovn_controller[155020]: 2025-11-25T09:55:38Z|00042|binding|INFO|Setting lport 60c6f2c0-ef30-4463-9cb7-83925fe7d146 up in Southbound
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.658 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:53:15 10.100.0.6'], port_security=['fa:16:3e:e5:53:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '05a3fbe7-a832-4fb6-ad57-bfdd256afc57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26e2b899-e667-4c76-b6b1-5134ece3a582', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6eca7baa-8b78-4122-aef9-182609bf4892, chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=60c6f2c0-ef30-4463-9cb7-83925fe7d146) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.659 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 60c6f2c0-ef30-4463-9cb7-83925fe7d146 in datapath ed91d6bf-56aa-4e17-a7ca-48f04cae081d bound to our chassis
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.660 164791 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed91d6bf-56aa-4e17-a7ca-48f04cae081d
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.667 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[cd76f735-848a-4233-95c0-3c8abba90f56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.668 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped91d6bf-51 in ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.669 258952 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped91d6bf-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.669 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[292968ad-124a-4bd6-aea6-03111f613453]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.670 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[a8506366-9ff4-4ae6-b291-66579a016648]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.680 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[cb315ba1-e951-4751-b060-d4b9261294e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 systemd-machined[216497]: New machine qemu-2-instance-00000005.
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.701 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[fbfa7df9-6231-4027-8290-b136b8f0cc4f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.723 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[75fa06cd-82ac-4fdd-bc06-52b90bbaa898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 NetworkManager[48903]: <info>  [1764064538.7270] manager: (taped91d6bf-50): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.727 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8a6c30-7ff5-4043-af4a-6a0009b32639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.752 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c15ba0-931e-43bf-a136-6f913c7c1eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.755 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[e6692c4a-944b-4b43-b1d5-4cbaaf084f5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 NetworkManager[48903]: <info>  [1764064538.7717] device (taped91d6bf-50): carrier: link connected
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.774 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[19abf818-f1e4-4e18-b16c-45f947ff0b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.786 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb4dc27-f2a6-44c7-b278-8ec51887690d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped91d6bf-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:18:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 331934, 'reachable_time': 18601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262652, 'error': None, 'target': 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.796 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[6d49d90a-5edb-45d2-8c40-bfec3366787a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:1845'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 331934, 'tstamp': 331934}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262655, 'error': None, 'target': 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.807 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[d2997fd9-684d-47ba-8cfc-235796dff3a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped91d6bf-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:18:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 331934, 'reachable_time': 18601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262656, 'error': None, 'target': 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
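The two privsep replies above are pyroute2-style netlink messages relayed from inside the ovnmeta namespace: an RTM_NEWADDR for the link-local address fe80::f816:3eff:fea1:1845 and an RTM_NEWLINK dump for the veth device taped91d6bf-51. A minimal sketch of reproducing the link dump directly with pyroute2 (an illustration assuming the namespace still exists on the host; this is not the agent's own code):

    from pyroute2 import NetNS

    # Open a netlink socket inside the OVN metadata namespace named in the
    # 'target' field above and dump its links, mirroring the RTM_NEWLINK reply.
    with NetNS("ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d") as ns:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_ADDRESS"),
                  link.get_attr("IFLA_OPERSTATE"))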
Nov 25 09:55:38 compute-0 ceph-mon[74207]: pgmap v732: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.827 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[822401f0-9d29-4740-a638-ecdb40fafbe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.865 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[1eefa46b-720b-4574-9424-a5ac745d79a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.866 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped91d6bf-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.867 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.867 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped91d6bf-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:38 compute-0 NetworkManager[48903]: <info>  [1764064538.8690] manager: (taped91d6bf-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 25 09:55:38 compute-0 kernel: taped91d6bf-50: entered promiscuous mode
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.868 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.871 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.871 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped91d6bf-50, col_values=(('external_ids', {'iface-id': '1bce7dcb-6145-475b-bb44-6a2b9bd7cbf1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
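Taken together, the three ovsdbapp transactions above remove the tap port from br-ex, plug it into br-int, and stamp the interface with the iface-id that OVN uses to bind the logical port. A hedged equivalent using the ovs-vsctl CLI (for illustration only; ovsdbapp speaks the OVSDB protocol directly rather than shelling out):

    import subprocess

    # DelPortCommand(if_exists=True) ~ ovs-vsctl --if-exists del-port
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port",
                    "br-ex", "taped91d6bf-50"], check=True)
    # AddPortCommand(may_exist=True) ~ ovs-vsctl --may-exist add-port
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port",
                    "br-int", "taped91d6bf-50"], check=True)
    # DbSetCommand on the Interface row's external_ids column
    subprocess.run(["ovs-vsctl", "set", "Interface", "taped91d6bf-50",
                    "external_ids:iface-id="
                    "1bce7dcb-6145-475b-bb44-6a2b9bd7cbf1"], check=True)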
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.872 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:38 compute-0 ovn_controller[155020]: 2025-11-25T09:55:38Z|00043|binding|INFO|Releasing lport 1bce7dcb-6145-475b-bb44-6a2b9bd7cbf1 from this chassis (sb_readonly=0)
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.888 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.889 164791 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed91d6bf-56aa-4e17-a7ca-48f04cae081d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed91d6bf-56aa-4e17-a7ca-48f04cae081d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.889 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[fe693fee-029d-404d-999d-ed3aeeb50c59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.890 164791 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: global
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     log         /dev/log local0 debug
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     log-tag     haproxy-metadata-proxy-ed91d6bf-56aa-4e17-a7ca-48f04cae081d
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     user        root
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     group       root
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     maxconn     1024
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     pidfile     /var/lib/neutron/external/pids/ed91d6bf-56aa-4e17-a7ca-48f04cae081d.pid.haproxy
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     daemon
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: defaults
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     log global
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     mode http
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     option httplog
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     option dontlognull
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     option http-server-close
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     option forwardfor
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     retries                 3
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     timeout http-request    30s
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     timeout connect         30s
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     timeout client          32s
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     timeout server          32s
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     timeout http-keep-alive 30s
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: listen listener
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     bind 169.254.169.254:80
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:     http-request add-header X-OVN-Network-ID ed91d6bf-56aa-4e17-a7ca-48f04cae081d
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:55:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:38.891 164791 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'env', 'PROCESS_TAG=haproxy-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed91d6bf-56aa-4e17-a7ca-48f04cae081d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
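Before rendering haproxy_cfg, the agent probed the pidfile named in the config and got ENOENT (09:55:38.889 above), meaning no metadata proxy was running for this network yet; it then writes the config and launches haproxy inside the ovnmeta namespace via rootwrap. A minimal sketch of the same pidfile liveness check (illustrative only, not neutron's get_value_from_file; the rendered file can also be syntax-checked with haproxy -c -f <path> before launch):

    import os

    # Pidfile path taken from the haproxy_cfg block above.
    PIDFILE = ("/var/lib/neutron/external/pids/"
               "ed91d6bf-56aa-4e17-a7ca-48f04cae081d.pid.haproxy")

    def haproxy_pid(path=PIDFILE):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except (OSError, ValueError):
            return None  # the ENOENT case logged above: no proxy yet

    def is_running(pid):
        if pid is None:
            return False
        try:
            os.kill(pid, 0)  # signal 0 only tests that the process exists
            return True
        except ProcessLookupError:
            return False

    print(is_running(haproxy_pid()))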
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.921 253516 DEBUG nova.compute.manager [req-fb81cdc6-1520-400a-9057-e73dc685e3bf req-c9fb81aa-d678-47f6-8d86-a279176ffeda c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received event network-vif-plugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.922 253516 DEBUG oslo_concurrency.lockutils [req-fb81cdc6-1520-400a-9057-e73dc685e3bf req-c9fb81aa-d678-47f6-8d86-a279176ffeda c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.922 253516 DEBUG oslo_concurrency.lockutils [req-fb81cdc6-1520-400a-9057-e73dc685e3bf req-c9fb81aa-d678-47f6-8d86-a279176ffeda c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.923 253516 DEBUG oslo_concurrency.lockutils [req-fb81cdc6-1520-400a-9057-e73dc685e3bf req-c9fb81aa-d678-47f6-8d86-a279176ffeda c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.923 253516 DEBUG nova.compute.manager [req-fb81cdc6-1520-400a-9057-e73dc685e3bf req-c9fb81aa-d678-47f6-8d86-a279176ffeda c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Processing event network-vif-plugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.949 253516 DEBUG nova.network.neutron [req-e11ac6bf-8d01-4724-8f4f-891bb2b45774 req-3f64dada-bbf4-4253-9f24-114665687837 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Updated VIF entry in instance network info cache for port 60c6f2c0-ef30-4463-9cb7-83925fe7d146. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.949 253516 DEBUG nova.network.neutron [req-e11ac6bf-8d01-4724-8f4f-891bb2b45774 req-3f64dada-bbf4-4253-9f24-114665687837 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Updating instance_info_cache with network_info: [{"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:55:38 compute-0 nova_compute[253512]: 2025-11-25 09:55:38.962 253516 DEBUG oslo_concurrency.lockutils [req-e11ac6bf-8d01-4724-8f4f-891bb2b45774 req-3f64dada-bbf4-4253-9f24-114665687837 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-05a3fbe7-a832-4fb6-ad57-bfdd256afc57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.082 253516 DEBUG nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.088 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064539.0877242, 05a3fbe7-a832-4fb6-ad57-bfdd256afc57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.088 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] VM Started (Lifecycle Event)
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.091 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.094 253516 INFO nova.virt.libvirt.driver [-] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Instance spawned successfully.
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.094 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.106 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.111 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.113 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.114 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.114 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.114 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.114 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.115 253516 DEBUG nova.virt.libvirt.driver [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.122 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.122 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064539.0878024, 05a3fbe7-a832-4fb6-ad57-bfdd256afc57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.122 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] VM Paused (Lifecycle Event)
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.143 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.145 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064539.0903406, 05a3fbe7-a832-4fb6-ad57-bfdd256afc57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.146 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] VM Resumed (Lifecycle Event)
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.160 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.162 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.188 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:55:39 compute-0 podman[262727]: 2025-11-25 09:55:39.192439927 +0000 UTC m=+0.030115300 container create c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.200 253516 INFO nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Took 8.15 seconds to spawn the instance on the hypervisor.
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.200 253516 DEBUG nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:55:39 compute-0 systemd[1]: Started libpod-conmon-c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155.scope.
Nov 25 09:55:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:55:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05afbf5a06fe153bfb5986505fb6456a280d2d3762699da3a23b7ed071fa35a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:39 compute-0 podman[262727]: 2025-11-25 09:55:39.249522862 +0000 UTC m=+0.087198236 container init c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 09:55:39 compute-0 podman[262727]: 2025-11-25 09:55:39.253922986 +0000 UTC m=+0.091598360 container start c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:55:39 compute-0 podman[262727]: 2025-11-25 09:55:39.178796668 +0000 UTC m=+0.016472052 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:55:39 compute-0 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[262740]: [NOTICE]   (262744) : New worker (262746) forked
Nov 25 09:55:39 compute-0 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[262740]: [NOTICE]   (262744) : Loading success.
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.274 253516 INFO nova.compute.manager [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Took 8.80 seconds to build instance.
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.292 253516 DEBUG oslo_concurrency.lockutils [None req-0853906c-799b-41e7-8a35-8e46c081f9df c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:39 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:55:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:39 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:55:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:39 compute-0 nova_compute[253512]: 2025-11-25 09:55:39.679 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:39.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 25 09:55:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 25 09:55:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:40.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:40 compute-0 ceph-mon[74207]: pgmap v733: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:55:41 compute-0 nova_compute[253512]: 2025-11-25 09:55:41.000 253516 DEBUG nova.compute.manager [req-ac595f6d-f1de-44ed-9f04-cf0b4c7371ba req-d8c70bb3-06c1-4b49-8a5a-77b016070493 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received event network-vif-plugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:41 compute-0 nova_compute[253512]: 2025-11-25 09:55:41.000 253516 DEBUG oslo_concurrency.lockutils [req-ac595f6d-f1de-44ed-9f04-cf0b4c7371ba req-d8c70bb3-06c1-4b49-8a5a-77b016070493 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:41 compute-0 nova_compute[253512]: 2025-11-25 09:55:41.000 253516 DEBUG oslo_concurrency.lockutils [req-ac595f6d-f1de-44ed-9f04-cf0b4c7371ba req-d8c70bb3-06c1-4b49-8a5a-77b016070493 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:41 compute-0 nova_compute[253512]: 2025-11-25 09:55:41.001 253516 DEBUG oslo_concurrency.lockutils [req-ac595f6d-f1de-44ed-9f04-cf0b4c7371ba req-d8c70bb3-06c1-4b49-8a5a-77b016070493 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:41 compute-0 nova_compute[253512]: 2025-11-25 09:55:41.001 253516 DEBUG nova.compute.manager [req-ac595f6d-f1de-44ed-9f04-cf0b4c7371ba req-d8c70bb3-06c1-4b49-8a5a-77b016070493 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] No waiting events found dispatching network-vif-plugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:55:41 compute-0 nova_compute[253512]: 2025-11-25 09:55:41.001 253516 WARNING nova.compute.manager [req-ac595f6d-f1de-44ed-9f04-cf0b4c7371ba req-d8c70bb3-06c1-4b49-8a5a-77b016070493 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received unexpected event network-vif-plugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 for instance with vm_state active and task_state None.
Nov 25 09:55:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v734: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 09:55:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:41.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:42 compute-0 sudo[262755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:55:42 compute-0 sudo[262755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:42 compute-0 sudo[262755]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:42 compute-0 NetworkManager[48903]: <info>  [1764064542.2739] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 25 09:55:42 compute-0 NetworkManager[48903]: <info>  [1764064542.2746] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 25 09:55:42 compute-0 nova_compute[253512]: 2025-11-25 09:55:42.274 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:42 compute-0 ovn_controller[155020]: 2025-11-25T09:55:42Z|00044|binding|INFO|Releasing lport 1bce7dcb-6145-475b-bb44-6a2b9bd7cbf1 from this chassis (sb_readonly=0)
Nov 25 09:55:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:42.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:42 compute-0 nova_compute[253512]: 2025-11-25 09:55:42.315 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:42 compute-0 ovn_controller[155020]: 2025-11-25T09:55:42Z|00045|binding|INFO|Releasing lport 1bce7dcb-6145-475b-bb44-6a2b9bd7cbf1 from this chassis (sb_readonly=0)
Nov 25 09:55:42 compute-0 nova_compute[253512]: 2025-11-25 09:55:42.320 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:42 compute-0 nova_compute[253512]: 2025-11-25 09:55:42.479 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:42 compute-0 nova_compute[253512]: 2025-11-25 09:55:42.759 253516 DEBUG nova.compute.manager [req-003b6783-d64e-4c18-aa71-edaedc9f1590 req-9bfa2935-ba37-42c0-a041-23a853aea235 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received event network-changed-60c6f2c0-ef30-4463-9cb7-83925fe7d146 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:42 compute-0 nova_compute[253512]: 2025-11-25 09:55:42.759 253516 DEBUG nova.compute.manager [req-003b6783-d64e-4c18-aa71-edaedc9f1590 req-9bfa2935-ba37-42c0-a041-23a853aea235 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Refreshing instance network info cache due to event network-changed-60c6f2c0-ef30-4463-9cb7-83925fe7d146. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:55:42 compute-0 nova_compute[253512]: 2025-11-25 09:55:42.759 253516 DEBUG oslo_concurrency.lockutils [req-003b6783-d64e-4c18-aa71-edaedc9f1590 req-9bfa2935-ba37-42c0-a041-23a853aea235 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-05a3fbe7-a832-4fb6-ad57-bfdd256afc57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:55:42 compute-0 nova_compute[253512]: 2025-11-25 09:55:42.760 253516 DEBUG oslo_concurrency.lockutils [req-003b6783-d64e-4c18-aa71-edaedc9f1590 req-9bfa2935-ba37-42c0-a041-23a853aea235 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-05a3fbe7-a832-4fb6-ad57-bfdd256afc57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:55:42 compute-0 nova_compute[253512]: 2025-11-25 09:55:42.760 253516 DEBUG nova.network.neutron [req-003b6783-d64e-4c18-aa71-edaedc9f1590 req-9bfa2935-ba37-42c0-a041-23a853aea235 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Refreshing network info cache for port 60c6f2c0-ef30-4463-9cb7-83925fe7d146 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:55:42 compute-0 ceph-mon[74207]: pgmap v734: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 09:55:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 09:55:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:43.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:44.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:44 compute-0 nova_compute[253512]: 2025-11-25 09:55:44.339 253516 DEBUG nova.network.neutron [req-003b6783-d64e-4c18-aa71-edaedc9f1590 req-9bfa2935-ba37-42c0-a041-23a853aea235 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Updated VIF entry in instance network info cache for port 60c6f2c0-ef30-4463-9cb7-83925fe7d146. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:55:44 compute-0 nova_compute[253512]: 2025-11-25 09:55:44.340 253516 DEBUG nova.network.neutron [req-003b6783-d64e-4c18-aa71-edaedc9f1590 req-9bfa2935-ba37-42c0-a041-23a853aea235 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Updating instance_info_cache with network_info: [{"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:55:44 compute-0 nova_compute[253512]: 2025-11-25 09:55:44.361 253516 DEBUG oslo_concurrency.lockutils [req-003b6783-d64e-4c18-aa71-edaedc9f1590 req-9bfa2935-ba37-42c0-a041-23a853aea235 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-05a3fbe7-a832-4fb6-ad57-bfdd256afc57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
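Compared with the cache entry written at 09:55:38.949, this refresh shows the port flipped to "active": true and gained floating IP 192.168.122.249 on fixed IP 10.100.0.6. A minimal sketch of pulling those fields out of the logged network_info payload (trimmed to the relevant keys; not Nova code):

    import json

    # Abbreviated copy of the network_info list logged above.
    payload = '''[{"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146",
      "active": true,
      "network": {"subnets": [{"ips": [{"address": "10.100.0.6",
        "floating_ips": [{"address": "192.168.122.249"}]}]}]}}]'''

    vif = json.loads(payload)[0]
    print("active:", vif["active"])
    for ip in vif["network"]["subnets"][0]["ips"]:
        print("fixed:", ip["address"],
              "floating:", [f["address"] for f in ip["floating_ips"]])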
Nov 25 09:55:44 compute-0 nova_compute[253512]: 2025-11-25 09:55:44.682 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:44 compute-0 ceph-mon[74207]: pgmap v735: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:55:44
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.nfs', 'images', '.mgr', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root']
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:55:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:55:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:55:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:55:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v736: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 09:55:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262425]: 25/11/2025 09:55:45 : epoch 69257d15 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb82c000df0 fd 39 proxy ignored for local
Nov 25 09:55:45 compute-0 kernel: ganesha.nfsd[262791]: segfault at 50 ip 00007fb8d9a5932e sp 00007fb89dffa210 error 4 in libntirpc.so.5.8[7fb8d9a3e000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 25 09:55:45 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:55:45 compute-0 systemd[1]: Started Process Core Dump (PID 262798/UID 0).
Nov 25 09:55:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:46 compute-0 systemd-coredump[262799]: Process 262429 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 48:
                                                    #0  0x00007fb8d9a5932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:55:46 compute-0 systemd[1]: systemd-coredump@12-262798-0.service: Deactivated successfully.
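ganesha.nfsd (PID 262429) segfaulted in libntirpc.so.5.8 immediately after the svc_vc_recv event, and systemd-coredump recorded the dump before the container is reaped below. A minimal sketch for pulling the dump back out of the journal (assumes coredumpctl is installed on the host):

    import subprocess

    # Print metadata and the captured stack trace for the crashed PID.
    subprocess.run(["coredumpctl", "info", "262429"], check=True)

    # For an interactive backtrace (needs gdb plus debuginfo for
    # libntirpc to resolve the frame at +0x2232e):
    # subprocess.run(["coredumpctl", "gdb", "262429"])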
Nov 25 09:55:46 compute-0 podman[262805]: 2025-11-25 09:55:46.826499726 +0000 UTC m=+0.025926875 container died a984ad0109bdc29fe2f01737d192d6222f6e3da649483f064a38cb6604dc6e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:55:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-febab89d2b0e738e0582178a91e806792f37089b55c62b3151de4d8b631c5d01-merged.mount: Deactivated successfully.
Nov 25 09:55:46 compute-0 ceph-mon[74207]: pgmap v736: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 09:55:46 compute-0 podman[262805]: 2025-11-25 09:55:46.870550606 +0000 UTC m=+0.069977734 container remove a984ad0109bdc29fe2f01737d192d6222f6e3da649483f064a38cb6604dc6e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:55:46 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:55:46 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:55:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:47.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:47.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:47.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:47.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:47 compute-0 nova_compute[253512]: 2025-11-25 09:55:47.479 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v737: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 25 09:55:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:47.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:48.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:48 compute-0 ceph-mon[74207]: pgmap v737: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 25 09:55:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 77 op/s
Nov 25 09:55:49 compute-0 nova_compute[253512]: 2025-11-25 09:55:49.684 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.002000020s ======
Nov 25 09:55:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:49.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Nov 25 09:55:50 compute-0 ovn_controller[155020]: 2025-11-25T09:55:50Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:53:15 10.100.0.6
Nov 25 09:55:50 compute-0 ovn_controller[155020]: 2025-11-25T09:55:50Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:53:15 10.100.0.6
Nov 25 09:55:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:50] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 25 09:55:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:55:50] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 25 09:55:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:50.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:50 compute-0 ceph-mon[74207]: pgmap v738: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 77 op/s
Nov 25 09:55:50 compute-0 podman[262846]: 2025-11-25 09:55:50.986724258 +0000 UTC m=+0.048499728 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 09:55:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v739: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 09:55:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:51.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:52.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:52 compute-0 nova_compute[253512]: 2025-11-25 09:55:52.480 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:52 compute-0 ceph-mon[74207]: pgmap v739: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 09:55:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:55:53 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/785351946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:55:53 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/785351946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:55:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:54.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:54 compute-0 nova_compute[253512]: 2025-11-25 09:55:54.688 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:54 compute-0 ceph-mon[74207]: pgmap v740: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001516511200104187 of space, bias 1.0, pg target 0.4549533600312561 quantized to 32 (current 32)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
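Note on the pg_autoscaler figures above: each pool's raw "pg target" is usage_ratio × bias × (OSD count × mon_target_pg_per_osd). Assuming this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100 (neither value is printed in these lines), the multiplier is 300, which reproduces the logged targets exactly. The "quantized" column then reflects rounding to a power of two clamped at each pool's minimum pg count (which is why tiny targets still read 32, 16, or 1), and the autoscaler only changes pg_num when the quantized target differs from the current value by more than its 3× threshold. A sketch under those assumptions:

    # Reproduce the raw pg targets logged by the pg_autoscaler above.
    # Assumed, not shown in the log: 3 OSDs under root -1, mon_target_pg_per_osd = 100.
    def raw_pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * osds * target_pg_per_osd

    print(raw_pg_target(0.001516511200104187, 1.0))   # 0.4549533600312561, pool 'vms'
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635, pool 'cephfs.cephfs.meta'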
Nov 25 09:55:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:55:55 compute-0 ceph-mon[74207]: pgmap v741: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:55:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:55.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:56.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:57.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:57.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:57.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:55:57.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:55:57 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 13.
Nov 25 09:55:57 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:55:57 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 09:55:57 compute-0 podman[262869]: 2025-11-25 09:55:57.249373473 +0000 UTC m=+0.063303051 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 25 09:55:57 compute-0 podman[262930]: 2025-11-25 09:55:57.360293254 +0000 UTC m=+0.034687990 container create 1aa73363a44015985c0c74291440fe0491443ad902da42232bdda08b78cde9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b476ebc4afaffb4a12c64d9aa6daae74df5dd9cebc157d7e529a5588d57c9e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b476ebc4afaffb4a12c64d9aa6daae74df5dd9cebc157d7e529a5588d57c9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b476ebc4afaffb4a12c64d9aa6daae74df5dd9cebc157d7e529a5588d57c9e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b476ebc4afaffb4a12c64d9aa6daae74df5dd9cebc157d7e529a5588d57c9e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rychik-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:57 compute-0 podman[262930]: 2025-11-25 09:55:57.406955255 +0000 UTC m=+0.081350002 container init 1aa73363a44015985c0c74291440fe0491443ad902da42232bdda08b78cde9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:55:57 compute-0 podman[262930]: 2025-11-25 09:55:57.41263924 +0000 UTC m=+0.087033966 container start 1aa73363a44015985c0c74291440fe0491443ad902da42232bdda08b78cde9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 09:55:57 compute-0 bash[262930]: 1aa73363a44015985c0c74291440fe0491443ad902da42232bdda08b78cde9bb
Nov 25 09:55:57 compute-0 podman[262930]: 2025-11-25 09:55:57.345245608 +0000 UTC m=+0.019640355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:55:57 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:55:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:55:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:55:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:55:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:55:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:55:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:55:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.482 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:55:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:55:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.557 253516 INFO nova.compute.manager [None req-ba406653-1ea6-495c-a9df-2aa525908068 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Get console output
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.560 259829 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 09:55:57 compute-0 sudo[262984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:55:57 compute-0 sudo[262984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:57 compute-0 sudo[262984]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:57 compute-0 sudo[263009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:55:57 compute-0 sudo[263009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:55:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:57.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.988 253516 DEBUG oslo_concurrency.lockutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.988 253516 DEBUG oslo_concurrency.lockutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.988 253516 DEBUG oslo_concurrency.lockutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.988 253516 DEBUG oslo_concurrency.lockutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.989 253516 DEBUG oslo_concurrency.lockutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.989 253516 INFO nova.compute.manager [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Terminating instance
Nov 25 09:55:57 compute-0 nova_compute[253512]: 2025-11-25 09:55:57.990 253516 DEBUG nova.compute.manager [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:55:58 compute-0 kernel: tap60c6f2c0-ef (unregistering): left promiscuous mode
Nov 25 09:55:58 compute-0 NetworkManager[48903]: <info>  [1764064558.0254] device (tap60c6f2c0-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.035 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 ovn_controller[155020]: 2025-11-25T09:55:58Z|00046|binding|INFO|Releasing lport 60c6f2c0-ef30-4463-9cb7-83925fe7d146 from this chassis (sb_readonly=0)
Nov 25 09:55:58 compute-0 ovn_controller[155020]: 2025-11-25T09:55:58Z|00047|binding|INFO|Setting lport 60c6f2c0-ef30-4463-9cb7-83925fe7d146 down in Southbound
Nov 25 09:55:58 compute-0 ovn_controller[155020]: 2025-11-25T09:55:58Z|00048|binding|INFO|Removing iface tap60c6f2c0-ef ovn-installed in OVS
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.038 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.041 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:53:15 10.100.0.6'], port_security=['fa:16:3e:e5:53:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '05a3fbe7-a832-4fb6-ad57-bfdd256afc57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26e2b899-e667-4c76-b6b1-5134ece3a582', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6eca7baa-8b78-4122-aef9-182609bf4892, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=60c6f2c0-ef30-4463-9cb7-83925fe7d146) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.042 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 60c6f2c0-ef30-4463-9cb7-83925fe7d146 in datapath ed91d6bf-56aa-4e17-a7ca-48f04cae081d unbound from our chassis
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.043 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed91d6bf-56aa-4e17-a7ca-48f04cae081d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.045 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[9295d09f-dc47-4648-ad19-2eca259a0bf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.046 164791 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d namespace which is not needed anymore
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.071 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 25 09:55:58 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 11.088s CPU time.
Nov 25 09:55:58 compute-0 systemd-machined[216497]: Machine qemu-2-instance-00000005 terminated.
Nov 25 09:55:58 compute-0 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[262740]: [NOTICE]   (262744) : haproxy version is 2.8.14-c23fe91
Nov 25 09:55:58 compute-0 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[262740]: [NOTICE]   (262744) : path to executable is /usr/sbin/haproxy
Nov 25 09:55:58 compute-0 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[262740]: [WARNING]  (262744) : Exiting Master process...
Nov 25 09:55:58 compute-0 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[262740]: [WARNING]  (262744) : Exiting Master process...
Nov 25 09:55:58 compute-0 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[262740]: [ALERT]    (262744) : Current worker (262746) exited with code 143 (Terminated)
Nov 25 09:55:58 compute-0 neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d[262740]: [WARNING]  (262744) : All workers exited. Exiting... (0)
Nov 25 09:55:58 compute-0 systemd[1]: libpod-c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155.scope: Deactivated successfully.
Nov 25 09:55:58 compute-0 podman[263077]: 2025-11-25 09:55:58.161308403 +0000 UTC m=+0.035421624 container died c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 09:55:58 compute-0 sudo[263009]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f05afbf5a06fe153bfb5986505fb6456a280d2d3762699da3a23b7ed071fa35a-merged.mount: Deactivated successfully.
Nov 25 09:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155-userdata-shm.mount: Deactivated successfully.
Nov 25 09:55:58 compute-0 podman[263077]: 2025-11-25 09:55:58.183563345 +0000 UTC m=+0.057676566 container cleanup c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:55:58 compute-0 systemd[1]: libpod-conmon-c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155.scope: Deactivated successfully.
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.207 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.216 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.221 253516 INFO nova.virt.libvirt.driver [-] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Instance destroyed successfully.
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.221 253516 DEBUG nova.objects.instance [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid 05a3fbe7-a832-4fb6-ad57-bfdd256afc57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.234 253516 DEBUG nova.virt.libvirt.vif [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:55:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1838931960',display_name='tempest-TestNetworkBasicOps-server-1838931960',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1838931960',id=5,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5JfKrSzRW3uIA2kb/BdlbYK+cNdQ7voGr9f4HHF406651oaxxcr/stjsonis3f8jxE5v2FRbpiulcVQUAQR/seSqNnq9wzlhs6aIac2P8DyWAsbG/pyb1H9xyF++6iRA==',key_name='tempest-TestNetworkBasicOps-1692779666',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:55:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-0pvttemy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:55:39Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=05a3fbe7-a832-4fb6-ad57-bfdd256afc57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.234 253516 DEBUG nova.network.os_vif_util [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "address": "fa:16:3e:e5:53:15", "network": {"id": "ed91d6bf-56aa-4e17-a7ca-48f04cae081d", "bridge": "br-int", "label": "tempest-network-smoke--236832573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c6f2c0-ef", "ovs_interfaceid": "60c6f2c0-ef30-4463-9cb7-83925fe7d146", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.235 253516 DEBUG nova.network.os_vif_util [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:53:15,bridge_name='br-int',has_traffic_filtering=True,id=60c6f2c0-ef30-4463-9cb7-83925fe7d146,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c6f2c0-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.235 253516 DEBUG os_vif [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:53:15,bridge_name='br-int',has_traffic_filtering=True,id=60c6f2c0-ef30-4463-9cb7-83925fe7d146,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c6f2c0-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.236 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.237 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60c6f2c0-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.238 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.240 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.244 253516 INFO os_vif [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:53:15,bridge_name='br-int',has_traffic_filtering=True,id=60c6f2c0-ef30-4463-9cb7-83925fe7d146,network=Network(ed91d6bf-56aa-4e17-a7ca-48f04cae081d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c6f2c0-ef')
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:55:58 compute-0 podman[263111]: 2025-11-25 09:55:58.260277266 +0000 UTC m=+0.052127925 container remove c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.265 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[e72a22bd-b8e1-4777-979f-759a2cc5a291]: (4, ('Tue Nov 25 09:55:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d (c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155)\nc2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155\nTue Nov 25 09:55:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d (c2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155)\nc2ee1d71e29377cc9423989989146e2ba370e567e1fc69a0c984989491123155\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:55:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.267 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[4d81013e-383a-45a0-82b9-75550740d646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.270 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped91d6bf-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:55:58 compute-0 kernel: taped91d6bf-50: left promiscuous mode
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.274 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.290 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:55:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:58.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.293 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[a621b1dd-ac28-4353-a1eb-052f7000d6dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.304 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[58b153f5-fc9c-4838-bff4-59344e838569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.305 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[778a5ef2-c960-427e-8a95-c10348e475bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.317 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[f96b78d8-f7ea-4082-a04d-30339d3ae735]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 331929, 'reachable_time': 23347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263171, 'error': None, 'target': 'ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:55:58 compute-0 systemd[1]: run-netns-ovnmeta\x2ded91d6bf\x2d56aa\x2d4e17\x2da7ca\x2d48f04cae081d.mount: Deactivated successfully.
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.319 164901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:55:58 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:55:58.319 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfceb49-a0fc-43b3-a53a-9994b8aa3acc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
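[annotation] The three privsep replies above are a link dump inside the ovnmeta namespace followed by the removal of the namespace itself; the plain "(4, None)" reply is the success acknowledgement for remove_netns. A minimal sketch of that removal step, assuming the pyroute2 package that neutron's ip_lib wraps (illustrative, not the exact neutron code):

    # Minimal sketch, assuming pyroute2 is installed and we run with root
    # privileges (in the real agent the privsep daemon supplies those).
    from pyroute2 import netns

    def remove_netns(name):
        # Treat an already-removed namespace as success, so the caller
        # gets the same benign reply seen in the log.
        try:
            netns.remove(name)
        except FileNotFoundError:
            pass

    remove_netns("ovnmeta-ed91d6bf-56aa-4e17-a7ca-48f04cae081d")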
Nov 25 09:55:58 compute-0 sudo[263146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:55:58 compute-0 sudo[263146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:58 compute-0 sudo[263146]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.354 253516 DEBUG nova.compute.manager [req-33374911-941b-433e-a0f1-f3baceb70613 req-95535eca-b236-42a1-95b8-1026164d4c8f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received event network-vif-unplugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.354 253516 DEBUG oslo_concurrency.lockutils [req-33374911-941b-433e-a0f1-f3baceb70613 req-95535eca-b236-42a1-95b8-1026164d4c8f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:58 compute-0 sudo[263177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:55:58 compute-0 sudo[263177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.497 253516 DEBUG oslo_concurrency.lockutils [req-33374911-941b-433e-a0f1-f3baceb70613 req-95535eca-b236-42a1-95b8-1026164d4c8f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.497 253516 DEBUG oslo_concurrency.lockutils [req-33374911-941b-433e-a0f1-f3baceb70613 req-95535eca-b236-42a1-95b8-1026164d4c8f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.498 253516 DEBUG nova.compute.manager [req-33374911-941b-433e-a0f1-f3baceb70613 req-95535eca-b236-42a1-95b8-1026164d4c8f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] No waiting events found dispatching network-vif-unplugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.498 253516 DEBUG nova.compute.manager [req-33374911-941b-433e-a0f1-f3baceb70613 req-95535eca-b236-42a1-95b8-1026164d4c8f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received event network-vif-unplugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
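[annotation] The acquire/release pair around the "05a3fbe7-...-events" lock and the "No waiting events found" message reflect a guarded waiter map: an external neutron event pops a matching registered waiter if one exists, otherwise it is simply dispatched and logged. A hedged sketch of that pattern (illustrative names; nova's real class lives in nova/compute/manager.py):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        # Sketch of the waiter-map pattern visible in the log lines above.
        def __init__(self):
            self._lock = threading.Lock()        # the "<uuid>-events" lock
            self._waiters = defaultdict(dict)    # uuid -> {event name: Event}

        def prepare_for_event(self, uuid, name):
            with self._lock:
                ev = threading.Event()
                self._waiters[uuid][name] = ev
                return ev

        def pop_instance_event(self, uuid, name):
            with self._lock:
                return self._waiters[uuid].pop(name, None)

    events = InstanceEvents()
    waiter = events.pop_instance_event(
        "05a3fbe7-a832-4fb6-ad57-bfdd256afc57",
        "network-vif-unplugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146")
    if waiter is None:
        print("No waiting events found; dispatching anyway")  # as in the log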
Nov 25 09:55:58 compute-0 ceph-mon[74207]: pgmap v742: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:55:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
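[annotation] The three "config rm" dispatches above are the mgr clearing per-host osd_memory_target overrides; the "osd/host:<name>" who-syntax scopes an option to the OSDs on one host. A small sketch of issuing the same cleanup by hand, assuming admin credentials on the node:

    import subprocess

    # Hedged sketch: replays the logged mon commands with the ceph CLI.
    for host in ("compute-0", "compute-1", "compute-2"):
        subprocess.run(
            ["ceph", "config", "rm", f"osd/host:{host}", "osd_memory_target"],
            check=True)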
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.573 253516 INFO nova.virt.libvirt.driver [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Deleting instance files /var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57_del
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.574 253516 INFO nova.virt.libvirt.driver [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Deletion of /var/lib/nova/instances/05a3fbe7-a832-4fb6-ad57-bfdd256afc57_del complete
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.630 253516 INFO nova.compute.manager [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Took 0.64 seconds to destroy the instance on the hypervisor.
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.631 253516 DEBUG oslo.service.loopingcall [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.631 253516 DEBUG nova.compute.manager [-] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:55:58 compute-0 nova_compute[253512]: 2025-11-25 09:55:58.631 253516 DEBUG nova.network.neutron [-] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
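[annotation] The loopingcall line above shows nova wrapping _deallocate_network_with_retries in an oslo.service looping call, so a transient neutron failure retries the deallocation instead of failing the delete. A minimal sketch of the same retry-until-done shape (plain Python with illustrative parameters, not the oslo API or nova's defaults):

    import time

    def wait_until_done(fn, attempts=3, delay=1.0):
        # Retry fn a few times with a fixed pause, re-raising the last
        # error only after all attempts are exhausted.
        last = None
        for _ in range(attempts):
            try:
                return fn()
            except Exception as exc:  # sketch only; real code narrows this
                last = exc
                time.sleep(delay)
        raise last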
Nov 25 09:55:58 compute-0 podman[263235]: 2025-11-25 09:55:58.721666487 +0000 UTC m=+0.030229105 container create 25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:55:58 compute-0 systemd[1]: Started libpod-conmon-25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f.scope.
Nov 25 09:55:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:55:58 compute-0 podman[263235]: 2025-11-25 09:55:58.788142569 +0000 UTC m=+0.096705177 container init 25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:55:58 compute-0 podman[263235]: 2025-11-25 09:55:58.793992847 +0000 UTC m=+0.102555455 container start 25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_buck, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 09:55:58 compute-0 podman[263235]: 2025-11-25 09:55:58.796132381 +0000 UTC m=+0.104694989 container attach 25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_buck, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 09:55:58 compute-0 epic_buck[263248]: 167 167
Nov 25 09:55:58 compute-0 systemd[1]: libpod-25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f.scope: Deactivated successfully.
Nov 25 09:55:58 compute-0 conmon[263248]: conmon 25595fe1f88c9985a0bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f.scope/container/memory.events
Nov 25 09:55:58 compute-0 podman[263235]: 2025-11-25 09:55:58.799526578 +0000 UTC m=+0.108089186 container died 25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Nov 25 09:55:58 compute-0 podman[263235]: 2025-11-25 09:55:58.710768233 +0000 UTC m=+0.019330851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-73a8cef0336751ba51ce9cd3f50aef002456365552940bcf839c79798a1884b0-merged.mount: Deactivated successfully.
Nov 25 09:55:58 compute-0 podman[263235]: 2025-11-25 09:55:58.83324511 +0000 UTC m=+0.141807718 container remove 25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_buck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:55:58 compute-0 systemd[1]: libpod-conmon-25595fe1f88c9985a0bd4c45ab01979ee29222b921982a2ad143f5034e4bdf7f.scope: Deactivated successfully.
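[annotation] The create/init/start/attach/died/remove sequence for container 25595fe1... is the full journald trace of one short-lived "podman run --rm" issued by cephadm; the "167 167" output looks like the container printing the ceph uid/gid. A hedged sketch of a probe of that shape (the stat invocation is an assumption, not taken from the log; only the image digest is):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # One throwaway container, removed on exit, whose only output is
    # "<uid> <gid>" of /var/lib/ceph inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "167 167"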
Nov 25 09:55:58 compute-0 podman[263270]: 2025-11-25 09:55:58.979114144 +0000 UTC m=+0.032371754 container create e10e0e2a33190f478271224fd674117920a6cfdf84a2930b4d0afd00a07c9d03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:55:59 compute-0 systemd[1]: Started libpod-conmon-e10e0e2a33190f478271224fd674117920a6cfdf84a2930b4d0afd00a07c9d03.scope.
Nov 25 09:55:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a781c61c9451a82893a7cd3fdc79af6f3d9dd1b99a2361021e14b5b335c51dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a781c61c9451a82893a7cd3fdc79af6f3d9dd1b99a2361021e14b5b335c51dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a781c61c9451a82893a7cd3fdc79af6f3d9dd1b99a2361021e14b5b335c51dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a781c61c9451a82893a7cd3fdc79af6f3d9dd1b99a2361021e14b5b335c51dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a781c61c9451a82893a7cd3fdc79af6f3d9dd1b99a2361021e14b5b335c51dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
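[annotation] The kernel's "supports timestamps until 2038" lines mean the overlay bind mounts sit on an XFS filesystem created without the bigtime feature, so inode timestamps are still 32-bit limited. A quick way to check, hedged (xfs_info output format varies across xfsprogs versions):

    import re
    import subprocess

    # Look for the bigtime flag in xfs_info output for the containers store.
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True, check=True).stdout
    match = re.search(r"bigtime=(\d)", info)
    print("bigtime enabled" if match and match.group(1) == "1"
          else "y2038-limited timestamps")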
Nov 25 09:55:59 compute-0 podman[263270]: 2025-11-25 09:55:59.046932947 +0000 UTC m=+0.100190557 container init e10e0e2a33190f478271224fd674117920a6cfdf84a2930b4d0afd00a07c9d03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_elgamal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:55:59 compute-0 podman[263270]: 2025-11-25 09:55:59.055288327 +0000 UTC m=+0.108545936 container start e10e0e2a33190f478271224fd674117920a6cfdf84a2930b4d0afd00a07c9d03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:55:59 compute-0 podman[263270]: 2025-11-25 09:55:59.056686833 +0000 UTC m=+0.109944443 container attach e10e0e2a33190f478271224fd674117920a6cfdf84a2930b4d0afd00a07c9d03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_elgamal, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:55:59 compute-0 podman[263270]: 2025-11-25 09:55:58.967085409 +0000 UTC m=+0.020343039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.289 253516 DEBUG nova.network.neutron [-] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.307 253516 INFO nova.compute.manager [-] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Took 0.68 seconds to deallocate network for instance.
Nov 25 09:55:59 compute-0 frosty_elgamal[263283]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:55:59 compute-0 frosty_elgamal[263283]: --> All data devices are unavailable
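[annotation] These two frosty_elgamal lines are the report phase of the "lvm batch" run started at 09:55:58: one LVM data device was passed (/dev/ceph_vg0/ceph_lv0) and it is reported unavailable because it already carries an OSD, so the batch is effectively a no-op. The plan can be requested without applying it; a hedged sketch, run inside the cephadm shell/container:

    import json
    import subprocess

    # Ask ceph-volume for the batch plan only (--report), no changes made.
    plan = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(plan))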
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.343 253516 DEBUG oslo_concurrency.lockutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.343 253516 DEBUG oslo_concurrency.lockutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.352 253516 DEBUG nova.compute.manager [req-36ff24b7-6337-42ea-bc63-4f546a411ff7 req-99f1d450-04c4-43b7-a590-26d29b294900 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received event network-vif-deleted-60c6f2c0-ef30-4463-9cb7-83925fe7d146 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:55:59 compute-0 systemd[1]: libpod-e10e0e2a33190f478271224fd674117920a6cfdf84a2930b4d0afd00a07c9d03.scope: Deactivated successfully.
Nov 25 09:55:59 compute-0 podman[263298]: 2025-11-25 09:55:59.394286632 +0000 UTC m=+0.017782813 container died e10e0e2a33190f478271224fd674117920a6cfdf84a2930b4d0afd00a07c9d03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.400 253516 DEBUG oslo_concurrency.processutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a781c61c9451a82893a7cd3fdc79af6f3d9dd1b99a2361021e14b5b335c51dc-merged.mount: Deactivated successfully.
Nov 25 09:55:59 compute-0 podman[263298]: 2025-11-25 09:55:59.42587055 +0000 UTC m=+0.049366720 container remove e10e0e2a33190f478271224fd674117920a6cfdf84a2930b4d0afd00a07c9d03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:55:59 compute-0 systemd[1]: libpod-conmon-e10e0e2a33190f478271224fd674117920a6cfdf84a2930b4d0afd00a07c9d03.scope: Deactivated successfully.
Nov 25 09:55:59 compute-0 sudo[263177]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:55:59 compute-0 sudo[263311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:55:59 compute-0 sudo[263311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:59 compute-0 sudo[263311]: pam_unix(sudo:session): session closed for user root
Nov 25 09:55:59 compute-0 sudo[263355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:55:59 compute-0 sudo[263355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.689 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.799 253516 DEBUG oslo_concurrency.processutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
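[annotation] Nova's RBD backend polls pool capacity by shelling out to "ceph df --format=json" as the openstack client, which is the 0.399s subprocess bracketed by the two lines above. A minimal equivalent outside nova, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf is present:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])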
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.804 253516 DEBUG nova.compute.provider_tree [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.824 253516 DEBUG nova.scheduler.client.report [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
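[annotation] The inventory dict above fixes this node's schedulable capacity as placement computes it: (total - reserved) * allocation_ratio, i.e. (7681 - 512) * 1.0 = 7169 MB of RAM, (4 - 0) * 4.0 = 16 vCPUs, and (59 - 1) * 0.9 = 52.2 GB of disk. As a worked check:

    inventory = {
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    # Placement-style capacity: (total - reserved) * allocation_ratio.
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")  # MEMORY_MB: 7169, VCPU: 16, DISK_GB: 52.2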
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.842 253516 DEBUG oslo_concurrency.lockutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.872 253516 INFO nova.scheduler.client.report [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance 05a3fbe7-a832-4fb6-ad57-bfdd256afc57
Nov 25 09:55:59 compute-0 nova_compute[253512]: 2025-11-25 09:55:59.919 253516 DEBUG oslo_concurrency.lockutils [None req-a93465be-c34c-458f-9868-c0a63eed7ed7 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:55:59 compute-0 podman[263416]: 2025-11-25 09:55:59.952573572 +0000 UTC m=+0.045584391 container create e67b933589f37ab376edb1664b3d37ae431f96d64ac0d7f31780e42a6bb4bb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 09:55:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:55:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:55:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:55:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:55:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:59.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
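[annotation] The anonymous "HEAD / HTTP/1.0" requests hitting radosgw here and again at 09:56:00 are load-balancer health probes; RGW answers 200 without authentication. The same probe in a few lines (the host comes from the logged peer addresses, but the port is an assumption, since the log does not show it):

    import http.client

    # Hedged sketch: HEAD / against the RGW beast frontend, as the probes do.
    conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200
    conn.close()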
Nov 25 09:55:59 compute-0 systemd[1]: Started libpod-conmon-e67b933589f37ab376edb1664b3d37ae431f96d64ac0d7f31780e42a6bb4bb23.scope.
Nov 25 09:56:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:56:00 compute-0 podman[263416]: 2025-11-25 09:56:00.019967524 +0000 UTC m=+0.112978354 container init e67b933589f37ab376edb1664b3d37ae431f96d64ac0d7f31780e42a6bb4bb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:56:00 compute-0 podman[263416]: 2025-11-25 09:56:00.025068029 +0000 UTC m=+0.118078838 container start e67b933589f37ab376edb1664b3d37ae431f96d64ac0d7f31780e42a6bb4bb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:56:00 compute-0 podman[263416]: 2025-11-25 09:56:00.026220711 +0000 UTC m=+0.119231521 container attach e67b933589f37ab376edb1664b3d37ae431f96d64ac0d7f31780e42a6bb4bb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:56:00 compute-0 festive_morse[263430]: 167 167
Nov 25 09:56:00 compute-0 podman[263416]: 2025-11-25 09:55:59.934104937 +0000 UTC m=+0.027115766 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:56:00 compute-0 systemd[1]: libpod-e67b933589f37ab376edb1664b3d37ae431f96d64ac0d7f31780e42a6bb4bb23.scope: Deactivated successfully.
Nov 25 09:56:00 compute-0 podman[263416]: 2025-11-25 09:56:00.030926032 +0000 UTC m=+0.123936840 container died e67b933589f37ab376edb1664b3d37ae431f96d64ac0d7f31780e42a6bb4bb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 09:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-78dcf9c9a7648ccacf072972133196f8fe4585e42ca4a267a6dde98c6fc70cb8-merged.mount: Deactivated successfully.
Nov 25 09:56:00 compute-0 podman[263416]: 2025-11-25 09:56:00.050597294 +0000 UTC m=+0.143608103 container remove e67b933589f37ab376edb1664b3d37ae431f96d64ac0d7f31780e42a6bb4bb23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:56:00 compute-0 systemd[1]: libpod-conmon-e67b933589f37ab376edb1664b3d37ae431f96d64ac0d7f31780e42a6bb4bb23.scope: Deactivated successfully.
Nov 25 09:56:00 compute-0 podman[263450]: 2025-11-25 09:56:00.205063843 +0000 UTC m=+0.032612528 container create 404cfe2991dc5c71f3c8e68061fa612823d34898e7092dd2c08849da36dad37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:56:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:00] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 25 09:56:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:00] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 25 09:56:00 compute-0 systemd[1]: Started libpod-conmon-404cfe2991dc5c71f3c8e68061fa612823d34898e7092dd2c08849da36dad37a.scope.
Nov 25 09:56:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cdd994363380fb2f0824b945427850fe73a129e447532cfd909063c0bdfb00a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cdd994363380fb2f0824b945427850fe73a129e447532cfd909063c0bdfb00a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cdd994363380fb2f0824b945427850fe73a129e447532cfd909063c0bdfb00a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cdd994363380fb2f0824b945427850fe73a129e447532cfd909063c0bdfb00a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:00 compute-0 podman[263450]: 2025-11-25 09:56:00.278169572 +0000 UTC m=+0.105718248 container init 404cfe2991dc5c71f3c8e68061fa612823d34898e7092dd2c08849da36dad37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:56:00 compute-0 podman[263450]: 2025-11-25 09:56:00.284053675 +0000 UTC m=+0.111602350 container start 404cfe2991dc5c71f3c8e68061fa612823d34898e7092dd2c08849da36dad37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:56:00 compute-0 podman[263450]: 2025-11-25 09:56:00.285339448 +0000 UTC m=+0.112888133 container attach 404cfe2991dc5c71f3c8e68061fa612823d34898e7092dd2c08849da36dad37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_torvalds, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:56:00 compute-0 podman[263450]: 2025-11-25 09:56:00.192475022 +0000 UTC m=+0.020023717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:56:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:00.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:00 compute-0 nova_compute[253512]: 2025-11-25 09:56:00.425 253516 DEBUG nova.compute.manager [req-18659b64-61b0-44b9-af5b-eeaff85e5349 req-49db199c-1e72-4da6-8869-0a51144eb73c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received event network-vif-plugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:00 compute-0 nova_compute[253512]: 2025-11-25 09:56:00.426 253516 DEBUG oslo_concurrency.lockutils [req-18659b64-61b0-44b9-af5b-eeaff85e5349 req-49db199c-1e72-4da6-8869-0a51144eb73c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:00 compute-0 nova_compute[253512]: 2025-11-25 09:56:00.426 253516 DEBUG oslo_concurrency.lockutils [req-18659b64-61b0-44b9-af5b-eeaff85e5349 req-49db199c-1e72-4da6-8869-0a51144eb73c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:00 compute-0 nova_compute[253512]: 2025-11-25 09:56:00.426 253516 DEBUG oslo_concurrency.lockutils [req-18659b64-61b0-44b9-af5b-eeaff85e5349 req-49db199c-1e72-4da6-8869-0a51144eb73c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "05a3fbe7-a832-4fb6-ad57-bfdd256afc57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:00 compute-0 nova_compute[253512]: 2025-11-25 09:56:00.426 253516 DEBUG nova.compute.manager [req-18659b64-61b0-44b9-af5b-eeaff85e5349 req-49db199c-1e72-4da6-8869-0a51144eb73c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] No waiting events found dispatching network-vif-plugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:56:00 compute-0 nova_compute[253512]: 2025-11-25 09:56:00.426 253516 WARNING nova.compute.manager [req-18659b64-61b0-44b9-af5b-eeaff85e5349 req-49db199c-1e72-4da6-8869-0a51144eb73c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Received unexpected event network-vif-plugged-60c6f2c0-ef30-4463-9cb7-83925fe7d146 for instance with vm_state deleted and task_state None.
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]: {
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:     "1": [
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:         {
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "devices": [
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "/dev/loop3"
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             ],
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "lv_name": "ceph_lv0",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "lv_size": "21470642176",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "name": "ceph_lv0",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "tags": {
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.cluster_name": "ceph",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.crush_device_class": "",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.encrypted": "0",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.osd_id": "1",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.type": "block",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.vdo": "0",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:                 "ceph.with_tpm": "0"
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             },
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "type": "block",
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:             "vg_name": "ceph_vg0"
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:         }
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]:     ]
Nov 25 09:56:00 compute-0 admiring_torvalds[263463]: }
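[annotation] The admiring_torvalds block above is the JSON printed by the "ceph-volume ... lvm list --format json" run from 09:55:59: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags flattened into a dict. A small sketch that pulls out the OSD id, its fsid, and the device path (the capture file name is an assumption):

    import json

    # `doc` is the captured stdout of the logged lvm list run.
    with open("lvm_list.json") as f:
        doc = json.load(f)

    for osd_id, lvs in doc.items():
        for lv in lvs:
            tags = lv["tags"]
            # -> 1 26fb5eac-2c31-4a21-bbae-433f98108699
            #    /dev/ceph_vg0/ceph_lv0 ['/dev/loop3']
            print(osd_id, tags["ceph.osd_fsid"], lv["lv_path"], lv["devices"])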
Nov 25 09:56:00 compute-0 systemd[1]: libpod-404cfe2991dc5c71f3c8e68061fa612823d34898e7092dd2c08849da36dad37a.scope: Deactivated successfully.
Nov 25 09:56:00 compute-0 ceph-mon[74207]: pgmap v743: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 09:56:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3372125422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:00 compute-0 podman[263472]: 2025-11-25 09:56:00.574298786 +0000 UTC m=+0.020161026 container died 404cfe2991dc5c71f3c8e68061fa612823d34898e7092dd2c08849da36dad37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cdd994363380fb2f0824b945427850fe73a129e447532cfd909063c0bdfb00a-merged.mount: Deactivated successfully.
Nov 25 09:56:00 compute-0 podman[263472]: 2025-11-25 09:56:00.602295864 +0000 UTC m=+0.048158102 container remove 404cfe2991dc5c71f3c8e68061fa612823d34898e7092dd2c08849da36dad37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_torvalds, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:56:00 compute-0 systemd[1]: libpod-conmon-404cfe2991dc5c71f3c8e68061fa612823d34898e7092dd2c08849da36dad37a.scope: Deactivated successfully.
Nov 25 09:56:00 compute-0 sudo[263355]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:00 compute-0 sudo[263484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:56:00 compute-0 sudo[263484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:00 compute-0 sudo[263484]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:00 compute-0 sudo[263509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:56:00 compute-0 sudo[263509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:01 compute-0 podman[263567]: 2025-11-25 09:56:01.048589787 +0000 UTC m=+0.028478914 container create 4d4d64b5c6128f45e00dafadd76e3801ba4300cd558e03d06bcdb34fa6c18006 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kirch, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:56:01 compute-0 systemd[1]: Started libpod-conmon-4d4d64b5c6128f45e00dafadd76e3801ba4300cd558e03d06bcdb34fa6c18006.scope.
Nov 25 09:56:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:56:01 compute-0 podman[263567]: 2025-11-25 09:56:01.104866213 +0000 UTC m=+0.084755349 container init 4d4d64b5c6128f45e00dafadd76e3801ba4300cd558e03d06bcdb34fa6c18006 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kirch, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:56:01 compute-0 podman[263567]: 2025-11-25 09:56:01.110581617 +0000 UTC m=+0.090470733 container start 4d4d64b5c6128f45e00dafadd76e3801ba4300cd558e03d06bcdb34fa6c18006 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:56:01 compute-0 silly_kirch[263580]: 167 167
Nov 25 09:56:01 compute-0 podman[263567]: 2025-11-25 09:56:01.113792951 +0000 UTC m=+0.093682068 container attach 4d4d64b5c6128f45e00dafadd76e3801ba4300cd558e03d06bcdb34fa6c18006 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kirch, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 09:56:01 compute-0 podman[263567]: 2025-11-25 09:56:01.11410027 +0000 UTC m=+0.093989387 container died 4d4d64b5c6128f45e00dafadd76e3801ba4300cd558e03d06bcdb34fa6c18006 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:56:01 compute-0 systemd[1]: libpod-4d4d64b5c6128f45e00dafadd76e3801ba4300cd558e03d06bcdb34fa6c18006.scope: Deactivated successfully.
Nov 25 09:56:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-033ac6ad42eeffe764beec096062ce94f2e2a16667c9f9b9c73c9d5f18623114-merged.mount: Deactivated successfully.
Nov 25 09:56:01 compute-0 podman[263567]: 2025-11-25 09:56:01.132564376 +0000 UTC m=+0.112453493 container remove 4d4d64b5c6128f45e00dafadd76e3801ba4300cd558e03d06bcdb34fa6c18006 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Nov 25 09:56:01 compute-0 podman[263567]: 2025-11-25 09:56:01.036449593 +0000 UTC m=+0.016338730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:56:01 compute-0 systemd[1]: libpod-conmon-4d4d64b5c6128f45e00dafadd76e3801ba4300cd558e03d06bcdb34fa6c18006.scope: Deactivated successfully.
Nov 25 09:56:01 compute-0 podman[263602]: 2025-11-25 09:56:01.26415861 +0000 UTC m=+0.030785694 container create 630449af49effeef3dd23878a8deaa6ce9244f89ff67091d173b74528bbfde97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:56:01 compute-0 systemd[1]: Started libpod-conmon-630449af49effeef3dd23878a8deaa6ce9244f89ff67091d173b74528bbfde97.scope.
Nov 25 09:56:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f161346bc4d708cab052d40428b046b6152f5f8b9527da120c69488e142f09c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f161346bc4d708cab052d40428b046b6152f5f8b9527da120c69488e142f09c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f161346bc4d708cab052d40428b046b6152f5f8b9527da120c69488e142f09c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f161346bc4d708cab052d40428b046b6152f5f8b9527da120c69488e142f09c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:01 compute-0 podman[263602]: 2025-11-25 09:56:01.325104267 +0000 UTC m=+0.091731351 container init 630449af49effeef3dd23878a8deaa6ce9244f89ff67091d173b74528bbfde97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:56:01 compute-0 podman[263602]: 2025-11-25 09:56:01.329950943 +0000 UTC m=+0.096578027 container start 630449af49effeef3dd23878a8deaa6ce9244f89ff67091d173b74528bbfde97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curie, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:56:01 compute-0 podman[263602]: 2025-11-25 09:56:01.331175641 +0000 UTC m=+0.097802746 container attach 630449af49effeef3dd23878a8deaa6ce9244f89ff67091d173b74528bbfde97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curie, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:56:01 compute-0 podman[263602]: 2025-11-25 09:56:01.252453034 +0000 UTC m=+0.019080138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:56:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v744: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Nov 25 09:56:01 compute-0 adoring_curie[263616]: {}
Nov 25 09:56:01 compute-0 lvm[263694]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:56:01 compute-0 lvm[263694]: VG ceph_vg0 finished
Nov 25 09:56:01 compute-0 systemd[1]: libpod-630449af49effeef3dd23878a8deaa6ce9244f89ff67091d173b74528bbfde97.scope: Deactivated successfully.
Nov 25 09:56:01 compute-0 podman[263695]: 2025-11-25 09:56:01.856259842 +0000 UTC m=+0.018699020 container died 630449af49effeef3dd23878a8deaa6ce9244f89ff67091d173b74528bbfde97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:56:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f161346bc4d708cab052d40428b046b6152f5f8b9527da120c69488e142f09c8-merged.mount: Deactivated successfully.
Nov 25 09:56:01 compute-0 podman[263695]: 2025-11-25 09:56:01.877635356 +0000 UTC m=+0.040074514 container remove 630449af49effeef3dd23878a8deaa6ce9244f89ff67091d173b74528bbfde97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_curie, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:56:01 compute-0 systemd[1]: libpod-conmon-630449af49effeef3dd23878a8deaa6ce9244f89ff67091d173b74528bbfde97.scope: Deactivated successfully.
Nov 25 09:56:01 compute-0 sudo[263509]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:56:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:56:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:56:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:56:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:01.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:01 compute-0 sudo[263708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:56:01 compute-0 sudo[263708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:01 compute-0 sudo[263708]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:02 compute-0 sudo[263733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:56:02 compute-0 sudo[263733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:02 compute-0 sudo[263733]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:02.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:02 compute-0 ceph-mon[74207]: pgmap v744: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Nov 25 09:56:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:56:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:56:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:03 compute-0 nova_compute[253512]: 2025-11-25 09:56:03.239 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v745: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 31 op/s
Nov 25 09:56:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:03 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:56:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:03 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:56:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:03.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:03 compute-0 podman[263760]: 2025-11-25 09:56:03.990425492 +0000 UTC m=+0.047963505 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 09:56:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:04.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:04 compute-0 ceph-mon[74207]: pgmap v745: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 31 op/s
Nov 25 09:56:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/660601116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:04 compute-0 nova_compute[253512]: 2025-11-25 09:56:04.691 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:05.384 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:05.384 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:05.384 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 31 op/s
Nov 25 09:56:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:06.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:06 compute-0 ceph-mon[74207]: pgmap v746: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 31 op/s
Nov 25 09:56:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:07.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:07.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:07.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:07.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 24 KiB/s wr, 60 op/s
Nov 25 09:56:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:07.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:08 compute-0 nova_compute[253512]: 2025-11-25 09:56:08.161 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:08 compute-0 nova_compute[253512]: 2025-11-25 09:56:08.242 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:08 compute-0 nova_compute[253512]: 2025-11-25 09:56:08.256 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:08.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:08 compute-0 ceph-mon[74207]: pgmap v747: 337 pgs: 337 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 24 KiB/s wr, 60 op/s
Nov 25 09:56:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v748: 337 pgs: 337 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 12 KiB/s wr, 59 op/s
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 09:56:09 compute-0 nova_compute[253512]: 2025-11-25 09:56:09.693 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:10] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 25 09:56:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:10] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 25 09:56:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:10.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:10 compute-0 ceph-mon[74207]: pgmap v748: 337 pgs: 337 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 12 KiB/s wr, 59 op/s
Nov 25 09:56:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:10 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:11 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 12 KiB/s wr, 60 op/s
Nov 25 09:56:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095611 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:56:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:11 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:11.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:12.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:12 compute-0 ceph-mon[74207]: pgmap v749: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 12 KiB/s wr, 60 op/s
Nov 25 09:56:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:12 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:13 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:13 compute-0 nova_compute[253512]: 2025-11-25 09:56:13.219 253516 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064558.2180548, 05a3fbe7-a832-4fb6-ad57-bfdd256afc57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:56:13 compute-0 nova_compute[253512]: 2025-11-25 09:56:13.219 253516 INFO nova.compute.manager [-] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] VM Stopped (Lifecycle Event)
Nov 25 09:56:13 compute-0 nova_compute[253512]: 2025-11-25 09:56:13.244 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:13 compute-0 nova_compute[253512]: 2025-11-25 09:56:13.402 253516 DEBUG nova.compute.manager [None req-21cd55f4-7a7f-4433-9cd3-f9caa1d51a55 - - - - - -] [instance: 05a3fbe7-a832-4fb6-ad57-bfdd256afc57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:56:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v750: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 25 09:56:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:13 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6e8001d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:14.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:14 compute-0 ceph-mon[74207]: pgmap v750: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 25 09:56:14 compute-0 nova_compute[253512]: 2025-11-25 09:56:14.695 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:14 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:56:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:56:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:56:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:56:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:56:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:56:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:56:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:15 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 25 09:56:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:15 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6e8001d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:15.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:16.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:16 compute-0 ceph-mon[74207]: pgmap v751: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 25 09:56:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:16 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:17.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:17.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:17.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:17.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:17 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 25 09:56:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:17 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:17.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:18 compute-0 nova_compute[253512]: 2025-11-25 09:56:18.247 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:18.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:18 compute-0 ceph-mon[74207]: pgmap v752: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 25 09:56:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:18 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6e8002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:19 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6e8002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v753: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:56:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2161517121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/481927686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1833756951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:19 compute-0 nova_compute[253512]: 2025-11-25 09:56:19.696 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:19 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:19.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:20] "GET /metrics HTTP/1.1" 200 48446 "" "Prometheus/2.51.0"
Nov 25 09:56:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:20] "GET /metrics HTTP/1.1" 200 48446 "" "Prometheus/2.51.0"
Nov 25 09:56:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:20.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:20 compute-0 nova_compute[253512]: 2025-11-25 09:56:20.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:20 compute-0 ceph-mon[74207]: pgmap v753: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:56:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2766823891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:20 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:21 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f40091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:21 compute-0 nova_compute[253512]: 2025-11-25 09:56:21.467 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:21 compute-0 nova_compute[253512]: 2025-11-25 09:56:21.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:21 compute-0 nova_compute[253512]: 2025-11-25 09:56:21.491 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:21 compute-0 nova_compute[253512]: 2025-11-25 09:56:21.491 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:21 compute-0 nova_compute[253512]: 2025-11-25 09:56:21.491 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:21 compute-0 nova_compute[253512]: 2025-11-25 09:56:21.491 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:56:21 compute-0 nova_compute[253512]: 2025-11-25 09:56:21.491 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v754: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:56:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:21 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f40091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:56:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1784564239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:21 compute-0 nova_compute[253512]: 2025-11-25 09:56:21.812 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:21 compute-0 podman[263834]: 2025-11-25 09:56:21.891864872 +0000 UTC m=+0.049256853 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
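[annotation] The podman event above is a periodic health_status report for the ovn_metadata_agent container, driven by the 'healthcheck' test mounted at /openstack/healthcheck in its config_data. Polling the same state by hand could look like the sketch below; the .State.Health.Status template path is an assumption (it mirrors the Docker-compatible inspect layout):

    # Sketch: read the health state podman records for the container named
    # in the event above. The Go-template path is assumed, not confirmed
    # by this log.
    import subprocess

    status = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True, check=True).stdout.strip()
    print('ovn_metadata_agent:', status)   # e.g. "healthy"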
Nov 25 09:56:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:21.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.005 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.006 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4606MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.007 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.007 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.059 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.059 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.080 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:22 compute-0 sudo[263852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:56:22 compute-0 sudo[263852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:22 compute-0 sudo[263852]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:22.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:56:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3008631376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.428 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.432 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.448 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
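[annotation] The inventory dict above is what the resource tracker reports to the placement service. The schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 16 VCPU, 7169 MB of RAM and 52.2 GB of disk. A quick check of that arithmetic using the numbers from the log:

    # Sketch: reproduce placement's effective-capacity arithmetic from the
    # inventory logged above: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g} schedulable')
    # MEMORY_MB: 7169, VCPU: 16, DISK_GB: 52.2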
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.470 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:56:22 compute-0 nova_compute[253512]: 2025-11-25 09:56:22.470 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:22 compute-0 ceph-mon[74207]: pgmap v754: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 25 09:56:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1784564239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3008631376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:22 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:23 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:23 compute-0 nova_compute[253512]: 2025-11-25 09:56:23.248 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:23 compute-0 nova_compute[253512]: 2025-11-25 09:56:23.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:23 compute-0 nova_compute[253512]: 2025-11-25 09:56:23.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:56:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:23 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:23.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:24.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.502 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.502 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.514 253516 DEBUG nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.572 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.572 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.576 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.577 253516 INFO nova.compute.claims [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Claim successful on node compute-0.ctlplane.example.com
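[annotation] "Claim successful" above means the requested flavor fit inside the free resources reported in the 09:56:22 resource view. A toy version of that fit test, using the m1.nano numbers that appear in the flavor dump further down; the real ResourceTracker claim also honors scheduler limits and allocation ratios:

    # Toy fit test behind "Claim successful". Numbers come from the
    # 09:56:22 resource view and the m1.nano flavor logged below.
    free   = {'vcpus': 4, 'ram_mb': 4606, 'disk_gb': 59}
    flavor = {'vcpus': 1, 'ram_mb': 128,  'disk_gb': 1}   # m1.nano
    assert all(flavor[k] <= free[k] for k in flavor), 'claim would fail'
    print('remaining after claim:', {k: free[k] - flavor[k] for k in flavor})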
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.640 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:24 compute-0 ceph-mon[74207]: pgmap v755: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.698 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:24 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:56:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1129092219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.964 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.967 253516 DEBUG nova.compute.provider_tree [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.980 253516 DEBUG nova.scheduler.client.report [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.993 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:24 compute-0 nova_compute[253512]: 2025-11-25 09:56:24.993 253516 DEBUG nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.027 253516 DEBUG nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.027 253516 DEBUG nova.network.neutron [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.043 253516 INFO nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.052 253516 DEBUG nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.136 253516 DEBUG nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.137 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.137 253516 INFO nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Creating image(s)
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.153 253516 DEBUG nova.storage.rbd_utils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image e414c01f-d327-411b-9309-c4c4dabd5b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:25 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.168 253516 DEBUG nova.storage.rbd_utils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image e414c01f-d327-411b-9309-c4c4dabd5b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.182 253516 DEBUG nova.storage.rbd_utils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image e414c01f-d327-411b-9309-c4c4dabd5b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.183 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.243 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
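[annotation] Before importing the cached base image, nova probes it with qemu-img info wrapped in oslo_concurrency.prlimit, which caps the probe at 1 GiB of address space (--as) and 30 CPU seconds (--cpu) as a guard against malicious images. A sketch of the same probe without the wrapper; the key names follow qemu-img's --output=json format:

    # Sketch: run the qemu-img probe from the log (minus the prlimit
    # wrapper) and read the virtual size from its JSON output.
    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9'
    info = json.loads(subprocess.run(
        ['qemu-img', 'info', base, '--force-share', '--output=json'],
        capture_output=True, text=True, check=True).stdout)
    print(info['format'], info['virtual-size'], 'bytes')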
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.244 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.245 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.245 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.260 253516 DEBUG nova.storage.rbd_utils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image e414c01f-d327-411b-9309-c4c4dabd5b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.262 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 e414c01f-d327-411b-9309-c4c4dabd5b4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.274 253516 DEBUG nova.policy [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
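[annotation] The policy DEBUG line above is nova asking oslo.policy whether the requester may attach to an external network; with only member/reader roles the rule evaluates false, and the port plug simply proceeds without external-network privileges. A sketch of the call shape, assuming an admin-only default rule (the actual default string is not shown in this log):

    # Sketch of the oslo.policy check behind the line above. The rule
    # default 'role:admin' is an assumption made for illustration.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))
    creds = {'roles': ['member', 'reader'],
             'project_id': 'fc0c386067c7443085ef3a11d7bc772f'}
    print(enforcer.authorize('network:attach_external_network',
                             {}, creds, do_raise=False))   # -> False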
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.394 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 e414c01f-d327-411b-9309-c4c4dabd5b4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.437 253516 DEBUG nova.storage.rbd_utils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image e414c01f-d327-411b-9309-c4c4dabd5b4a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
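[annotation] The import/resize pair above is the Ceph-backed boot flow: the cached base image is imported into the vms pool as <uuid>_disk, then grown to the flavor's root disk size. The resize target logged above is just root_gb converted to binary bytes:

    # The resize target is the flavor root disk in bytes: 1 GiB for m1.nano.
    root_gb = 1
    print(root_gb * 1024 ** 3)   # 1073741824, matching the resize line above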
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.483 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.483 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.487 253516 DEBUG nova.objects.instance [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid e414c01f-d327-411b-9309-c4c4dabd5b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:56:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v756: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.610 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.610 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.611 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.611 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.611 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Ensure instance console log exists: /var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.612 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.612 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:25 compute-0 nova_compute[253512]: 2025-11-25 09:56:25.612 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1129092219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:25 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f40091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:25.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.061 253516 DEBUG nova.network.neutron [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Successfully created port: 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:56:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.596 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:56:26 compute-0 ceph-mon[74207]: pgmap v756: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.698 253516 DEBUG nova.network.neutron [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Successfully updated port: 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.717 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.717 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.717 253516 DEBUG nova.network.neutron [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.816 253516 DEBUG nova.compute.manager [req-24e7d962-e280-4faa-ab17-89c6dc68c452 req-a89c5710-f2fc-469d-a182-7f46bd2d48fc c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-changed-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.817 253516 DEBUG nova.compute.manager [req-24e7d962-e280-4faa-ab17-89c6dc68c452 req-a89c5710-f2fc-469d-a182-7f46bd2d48fc c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing instance network info cache due to event network-changed-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.817 253516 DEBUG oslo_concurrency.lockutils [req-24e7d962-e280-4faa-ab17-89c6dc68c452 req-a89c5710-f2fc-469d-a182-7f46bd2d48fc c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:56:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:26 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:26 compute-0 nova_compute[253512]: 2025-11-25 09:56:26.867 253516 DEBUG nova.network.neutron [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:56:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:27.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:27.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:27.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:27.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
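[annotation] The alertmanager errors above are plain DNS failures: the ceph-dashboard webhook targets np000553469{4,5,6}.shiftstack do not resolve against 192.168.122.80. A quick way to reproduce the lookup outcome from this host, using the stdlib resolver rather than alertmanager's own:

    # Sketch: reproduce the "no such host" failures from the alertmanager
    # lines above via the system resolver.
    import socket

    for host in ('np0005534694.shiftstack',
                 'np0005534695.shiftstack',
                 'np0005534696.shiftstack'):
        try:
            addr = socket.getaddrinfo(host, 8443)[0][4][0]
            print(host, '->', addr)
        except socket.gaierror as exc:
            print(host, 'failed:', exc)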
Nov 25 09:56:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:27 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.446 253516 DEBUG nova.network.neutron [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
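[annotation] The instance_info_cache update above carries the whole VIF description as JSON. A sketch pulling out the fields usually needed when debugging (MAC, fixed IP, MTU, tap device); the literal below trims the log payload to one VIF with only the keys the sketch reads:

    # Sketch: extract the commonly needed fields from the network_info
    # structure logged above (payload trimmed to the keys used here).
    import json

    network_info = json.loads('''[{
      "id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82",
      "address": "fa:16:3e:03:f5:2a",
      "network": {"bridge": "br-int",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.6"}],
                               "version": 4}],
                  "meta": {"mtu": 1442}},
      "devname": "tap7f3b9b60-a3", "vnic_type": "normal"}]''')
    vif = network_info[0]
    ip = vif['network']['subnets'][0]['ips'][0]['address']
    print(vif['address'], ip, vif['network']['meta']['mtu'], vif['devname'])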
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.469 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.470 253516 DEBUG nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Instance network_info: |[{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.470 253516 DEBUG oslo_concurrency.lockutils [req-24e7d962-e280-4faa-ab17-89c6dc68c452 req-a89c5710-f2fc-469d-a182-7f46bd2d48fc c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.470 253516 DEBUG nova.network.neutron [req-24e7d962-e280-4faa-ab17-89c6dc68c452 req-a89c5710-f2fc-469d-a182-7f46bd2d48fc c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing network info cache for port 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.472 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Start _get_guest_xml network_info=[{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.476 253516 WARNING nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.487 253516 DEBUG nova.virt.libvirt.host [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.488 253516 DEBUG nova.virt.libvirt.host [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.491 253516 DEBUG nova.virt.libvirt.host [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.491 253516 DEBUG nova.virt.libvirt.host [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.491 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.491 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.492 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.492 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.492 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.492 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.492 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.493 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.493 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.493 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.493 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.493 253516 DEBUG nova.virt.hardware [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
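
With no flavor or image constraints (every limit and preference logged as 0, i.e. unset), the only way to factor one vCPU is sockets=1, cores=1, threads=1. A hedged sketch of that enumeration, illustrative rather than Nova's actual _get_possible_cpu_topologies:

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate factorizations of vcpus into sockets * cores * threads
        # that respect the (default) limits logged above.
        return [(s, c, t)
                for s, c, t in product(range(1, vcpus + 1), repeat=3)
                if s * c * t == vcpus
                and s <= max_sockets and c <= max_cores and t <= max_threads]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the one possible topology
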
Nov 25 09:56:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.495 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:27 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:56:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154839492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.835 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
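
The driver shells out to the ceph CLI here rather than using librados directly. The same monitor lookup can be reproduced by hand; a sketch assuming the client.openstack keyring and conf path from the log:

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    # The JSON mon map carries the monitor addresses that end up as <host>
    # entries in the guest XML below (exact schema varies by Ceph release).
    print([m['name'] for m in json.loads(raw)['mons']])
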
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.852 253516 DEBUG nova.storage.rbd_utils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image e414c01f-d327-411b-9309-c4c4dabd5b4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:27 compute-0 nova_compute[253512]: 2025-11-25 09:56:27.854 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:27.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:27 compute-0 podman[264133]: 2025-11-25 09:56:27.996551308 +0000 UTC m=+0.055254871 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 09:56:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:56:28 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3271148749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.194 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.196 253516 DEBUG nova.virt.libvirt.vif [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2009582697',display_name='tempest-TestNetworkBasicOps-server-2009582697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2009582697',id=6,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD21EiYZKhXbIpyaNEUjP1ulP9c0zDwkxr0Xxe9kxy5T7Kh/aZqrRNdEYeVYyDq7wYIqSwgggji3NCoHXpcuxZfFxnprvDIJCcOEcX/dIdfv+vRs+aEB3wFMQZGt8WdE2g==',key_name='tempest-TestNetworkBasicOps-1281314821',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-36v3wqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:56:25Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=e414c01f-d327-411b-9309-c4c4dabd5b4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.196 253516 DEBUG nova.network.os_vif_util [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.197 253516 DEBUG nova.network.os_vif_util [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:f5:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82,network=Network(1da31a90-4851-4e23-b49c-d37e40c75813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f3b9b60-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.198 253516 DEBUG nova.objects.instance [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid e414c01f-d327-411b-9309-c4c4dabd5b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.212 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <uuid>e414c01f-d327-411b-9309-c4c4dabd5b4a</uuid>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <name>instance-00000006</name>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <memory>131072</memory>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <vcpu>1</vcpu>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <metadata>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <nova:name>tempest-TestNetworkBasicOps-server-2009582697</nova:name>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <nova:creationTime>2025-11-25 09:56:27</nova:creationTime>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <nova:flavor name="m1.nano">
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <nova:memory>128</nova:memory>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <nova:disk>1</nova:disk>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <nova:swap>0</nova:swap>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       </nova:flavor>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <nova:owner>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       </nova:owner>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <nova:ports>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <nova:port uuid="7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82">
Nov 25 09:56:28 compute-0 nova_compute[253512]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         </nova:port>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       </nova:ports>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     </nova:instance>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   </metadata>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <sysinfo type="smbios">
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <system>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <entry name="serial">e414c01f-d327-411b-9309-c4c4dabd5b4a</entry>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <entry name="uuid">e414c01f-d327-411b-9309-c4c4dabd5b4a</entry>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     </system>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   </sysinfo>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <os>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <boot dev="hd"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <smbios mode="sysinfo"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   </os>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <features>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <acpi/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <apic/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <vmcoreinfo/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   </features>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <clock offset="utc">
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <timer name="hpet" present="no"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   </clock>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <cpu mode="host-model" match="exact">
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <disk type="network" device="disk">
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/e414c01f-d327-411b-9309-c4c4dabd5b4a_disk">
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       </source>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <target dev="vda" bus="virtio"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <disk type="network" device="cdrom">
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/e414c01f-d327-411b-9309-c4c4dabd5b4a_disk.config">
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       </source>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:56:28 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <target dev="sda" bus="sata"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <interface type="ethernet">
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <mac address="fa:16:3e:03:f5:2a"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <mtu size="1442"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <target dev="tap7f3b9b60-a3"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <serial type="pty">
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <log file="/var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/console.log" append="off"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     </serial>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <video>
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     </video>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <input type="tablet" bus="usb"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <rng model="virtio">
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <controller type="usb" index="0"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     <memballoon model="virtio">
Nov 25 09:56:28 compute-0 nova_compute[253512]:       <stats period="10"/>
Nov 25 09:56:28 compute-0 nova_compute[253512]:     </memballoon>
Nov 25 09:56:28 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:56:28 compute-0 nova_compute[253512]: </domain>
Nov 25 09:56:28 compute-0 nova_compute[253512]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
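
Nova hands XML like the document above to libvirt to create the guest. A minimal sketch with the libvirt-python bindings, assuming the XML has been saved to a local file (illustrative only; the driver drives libvirt internally with far more setup and error handling):

    import libvirt

    # Hypothetical local copy of the domain XML logged above.
    with open('instance-00000006.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.createXML(xml, 0)  # create a transient domain and start it
    print(dom.name(), dom.UUIDString())
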
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.213 253516 DEBUG nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Preparing to wait for external event network-vif-plugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.213 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.213 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.213 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
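
The acquire/release pair above is oslo.concurrency's named-lock pattern guarding the per-instance event registry while the network-vif-plugged waiter is registered. A minimal sketch, assuming oslo.concurrency:

    from oslo_concurrency import lockutils

    uuid = 'e414c01f-d327-411b-9309-c4c4dabd5b4a'

    @lockutils.synchronized(uuid + '-events')
    def create_or_get_event():
        # Critical section: mutate the per-instance event map while the
        # named lock "<uuid>-events" is held, as in the entries above.
        pass

    create_or_get_event()
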
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.214 253516 DEBUG nova.virt.libvirt.vif [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2009582697',display_name='tempest-TestNetworkBasicOps-server-2009582697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2009582697',id=6,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD21EiYZKhXbIpyaNEUjP1ulP9c0zDwkxr0Xxe9kxy5T7Kh/aZqrRNdEYeVYyDq7wYIqSwgggji3NCoHXpcuxZfFxnprvDIJCcOEcX/dIdfv+vRs+aEB3wFMQZGt8WdE2g==',key_name='tempest-TestNetworkBasicOps-1281314821',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-36v3wqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:56:25Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=e414c01f-d327-411b-9309-c4c4dabd5b4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.214 253516 DEBUG nova.network.os_vif_util [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.214 253516 DEBUG nova.network.os_vif_util [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:f5:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82,network=Network(1da31a90-4851-4e23-b49c-d37e40c75813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f3b9b60-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.215 253516 DEBUG os_vif [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:f5:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82,network=Network(1da31a90-4851-4e23-b49c-d37e40c75813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f3b9b60-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.215 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.216 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.216 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.218 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.218 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f3b9b60-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.219 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f3b9b60-a3, col_values=(('external_ids', {'iface-id': '7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:f5:2a', 'vm-uuid': 'e414c01f-d327-411b-9309-c4c4dabd5b4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.219 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:28 compute-0 NetworkManager[48903]: <info>  [1764064588.2205] manager: (tap7f3b9b60-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.222 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.224 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.225 253516 INFO os_vif [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:f5:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82,network=Network(1da31a90-4851-4e23-b49c-d37e40c75813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f3b9b60-a3')
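
The ovsdbapp transaction that just completed (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) is equivalent to a single ovs-vsctl invocation. A hedged sketch using the exact values from the log entries above:

    import subprocess

    subprocess.check_call([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap7f3b9b60-a3',
        '--', 'set', 'Interface', 'tap7f3b9b60-a3',
        'external_ids:iface-id=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:03:f5:2a',
        'external_ids:vm-uuid=e414c01f-d327-411b-9309-c4c4dabd5b4a'])

It is the iface-id external_id that lets ovn-controller match the OVS port to its logical port, which is why the "Claiming lport" messages follow a few entries later.
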
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.253 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.253 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.253 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:03:f5:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.253 253516 INFO nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Using config drive
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.270 253516 DEBUG nova.storage.rbd_utils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image e414c01f-d327-411b-9309-c4c4dabd5b4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:28.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.639 253516 INFO nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Creating config drive at /var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/disk.config
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.643 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj9nrsq71 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.656 253516 DEBUG nova.network.neutron [req-24e7d962-e280-4faa-ab17-89c6dc68c452 req-a89c5710-f2fc-469d-a182-7f46bd2d48fc c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updated VIF entry in instance network info cache for port 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.657 253516 DEBUG nova.network.neutron [req-24e7d962-e280-4faa-ab17-89c6dc68c452 req-a89c5710-f2fc-469d-a182-7f46bd2d48fc c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.672 253516 DEBUG oslo_concurrency.lockutils [req-24e7d962-e280-4faa-ab17-89c6dc68c452 req-a89c5710-f2fc-469d-a182-7f46bd2d48fc c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:56:28 compute-0 ceph-mon[74207]: pgmap v757: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:56:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4154839492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:56:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3271148749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.761 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj9nrsq71" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.781 253516 DEBUG nova.storage.rbd_utils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image e414c01f-d327-411b-9309-c4c4dabd5b4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.783 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/disk.config e414c01f-d327-411b-9309-c4c4dabd5b4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:56:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:28 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.867 253516 DEBUG oslo_concurrency.processutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/disk.config e414c01f-d327-411b-9309-c4c4dabd5b4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.867 253516 INFO nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Deleting local config drive /var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/disk.config because it was imported into RBD.
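
The config-drive flow visible in the last few entries reduces to three steps: build the ISO with mkisofs, import it into the vms RBD pool, then delete the local copy. A sketch with the arguments taken from the log (/tmp/tmpj9nrsq71 is the ephemeral staging directory the driver happened to use here):

    import os
    import subprocess

    iso = ('/var/lib/nova/instances/'
           'e414c01f-d327-411b-9309-c4c4dabd5b4a/disk.config')
    subprocess.check_call(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher',
         'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpj9nrsq71'])
    subprocess.check_call(
        ['rbd', 'import', '--pool', 'vms', iso,
         'e414c01f-d327-411b-9309-c4c4dabd5b4a_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    # Mirrors "Deleting local config drive ... imported into RBD" above.
    os.remove(iso)
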
Nov 25 09:56:28 compute-0 kernel: tap7f3b9b60-a3: entered promiscuous mode
Nov 25 09:56:28 compute-0 NetworkManager[48903]: <info>  [1764064588.8967] manager: (tap7f3b9b60-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 25 09:56:28 compute-0 ovn_controller[155020]: 2025-11-25T09:56:28Z|00049|binding|INFO|Claiming lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 for this chassis.
Nov 25 09:56:28 compute-0 ovn_controller[155020]: 2025-11-25T09:56:28Z|00050|binding|INFO|7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82: Claiming fa:16:3e:03:f5:2a 10.100.0.6
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.902 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.906 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.913 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:f5:2a 10.100.0.6'], port_security=['fa:16:3e:03:f5:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e414c01f-d327-411b-9309-c4c4dabd5b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da31a90-4851-4e23-b49c-d37e40c75813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb028197-733c-4fbd-bd01-615e4c545aa9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09fba177-1b7b-4e1a-96ee-300569eeb103, chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.914 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 in datapath 1da31a90-4851-4e23-b49c-d37e40c75813 bound to our chassis
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.915 164791 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1da31a90-4851-4e23-b49c-d37e40c75813
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.922 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dd23aa-f936-49f1-8146-ad482d63175d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.923 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1da31a90-41 in ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
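
Provisioning the datapath means creating a veth pair whose peer lives inside the ovnmeta- namespace, so the metadata proxy can answer 169.254.169.254 there. A hedged iproute2 equivalent of what the agent is doing (the agent itself goes through pyroute2 via oslo.privsep, as the surrounding reply entries show):

    import subprocess

    ns = 'ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813'
    subprocess.check_call(['ip', 'netns', 'add', ns])
    # One end (tap1da31a90-40) stays in the root namespace; the peer
    # (tap1da31a90-41) is created directly inside the ovnmeta namespace.
    subprocess.check_call(['ip', 'link', 'add', 'tap1da31a90-40',
                           'type', 'veth',
                           'peer', 'name', 'tap1da31a90-41', 'netns', ns])
    subprocess.check_call(['ip', 'link', 'set', 'tap1da31a90-40', 'up'])
    subprocess.check_call(['ip', '-n', ns, 'link', 'set',
                           'tap1da31a90-41', 'up'])
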
Nov 25 09:56:28 compute-0 systemd-udevd[264251]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.927 258952 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1da31a90-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.927 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2a116b-4b12-4464-8d50-d50cf161e4b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.927 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[dd924ff9-0f65-41ac-8a19-39db0b1493cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:28 compute-0 systemd-machined[216497]: New machine qemu-3-instance-00000006.
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.938 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a62120-c0f1-4783-8dc8-c810400477cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:28 compute-0 NetworkManager[48903]: <info>  [1764064588.9399] device (tap7f3b9b60-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:56:28 compute-0 NetworkManager[48903]: <info>  [1764064588.9408] device (tap7f3b9b60-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:56:28 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.958 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c365e7-8bb7-42d1-96f4-4642b6ead2f5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.977 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c29601-7db2-4d05-b492-d451a7f00cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:28 compute-0 systemd-udevd[264255]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:56:28 compute-0 NetworkManager[48903]: <info>  [1764064588.9820] manager: (tap1da31a90-40): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Nov 25 09:56:28 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:28.982 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[4647f4da-826c-4b0b-aa97-dfd6a185c772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.986 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:28 compute-0 ovn_controller[155020]: 2025-11-25T09:56:28Z|00051|binding|INFO|Setting lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 ovn-installed in OVS
Nov 25 09:56:28 compute-0 ovn_controller[155020]: 2025-11-25T09:56:28Z|00052|binding|INFO|Setting lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 up in Southbound
Nov 25 09:56:28 compute-0 nova_compute[253512]: 2025-11-25 09:56:28.991 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.006 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[c731f966-a57e-455a-8219-bb8a2a81c61b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.008 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2fbe02-7dbe-4438-a4da-73c6b41cbf73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:29 compute-0 NetworkManager[48903]: <info>  [1764064589.0213] device (tap1da31a90-40): carrier: link connected
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.024 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[2e451a66-c629-45af-b8f3-0f8d637a0a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.035 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[15ca76f4-002d-4ca6-bf33-def21865f4a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da31a90-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:14:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 336959, 'reachable_time': 22557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264275, 'error': None, 'target': 'ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.043 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[aee05154-21db-4322-93a5-9b49378e4b9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:1490'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 336959, 'tstamp': 336959}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264276, 'error': None, 'target': 'ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.053 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[4bde5766-64ee-40d5-99c9-70e42772af72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da31a90-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:14:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 336959, 'reachable_time': 22557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264277, 'error': None, 'target': 'ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.069 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[1365090d-d8fa-4e6b-b4ed-1de216493126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.097 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f925ee-f701-4c40-a920-fa64229bcff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
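The privsep replies above are pyroute2 netlink messages: RTM_NEWLINK for the tap1da31a90-41 veth link and RTM_NEWADDR for its fe80:: link-local address, both fetched from inside the namespace named in each message's 'target' field. A minimal sketch of reading the same attributes directly with pyroute2, assuming the library is installed and that namespace still exists:

    from pyroute2 import NetNS

    # Namespace name taken from the 'target' field in the replies above.
    NS = 'ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813'

    with NetNS(NS) as ns:
        for link in ns.get_links():
            # get_attr() looks up one ['IFLA_*', value] pair in 'attrs'.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])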
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.098 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da31a90-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.099 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.099 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1da31a90-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:29 compute-0 kernel: tap1da31a90-40: entered promiscuous mode
Nov 25 09:56:29 compute-0 NetworkManager[48903]: <info>  [1764064589.1014] manager: (tap1da31a90-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.102 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.104 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1da31a90-40, col_values=(('external_ids', {'iface-id': '1198a2e0-5a95-4f4d-8225-c7b2e30ebbe1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
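The three ovsdbapp commands above replumb the metadata tap: drop any stale tap1da31a90-40 port from br-ex, add it to br-int, and tag its Interface row with the Neutron port UUID so ovn-controller can bind it. Roughly the same transaction written against ovsdbapp's Open_vSwitch schema API, as a sketch (the ovsdb-server socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (socket path assumed).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction, three commands, mirroring DelPort/AddPort/DbSet above.
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap1da31a90-40', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap1da31a90-40', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tap1da31a90-40',
            ('external_ids',
             {'iface-id': '1198a2e0-5a95-4f4d-8225-c7b2e30ebbe1'})))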
Nov 25 09:56:29 compute-0 ovn_controller[155020]: 2025-11-25T09:56:29Z|00053|binding|INFO|Releasing lport 1198a2e0-5a95-4f4d-8225-c7b2e30ebbe1 from this chassis (sb_readonly=0)
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.105 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.106 164791 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1da31a90-4851-4e23-b49c-d37e40c75813.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1da31a90-4851-4e23-b49c-d37e40c75813.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
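The ENOENT logged above is benign: the agent probes the haproxy pidfile before it has ever spawned a proxy for this network, and a missing file simply means "not running yet", which is what triggers the config generation and spawn that follow. A sketch of that observable behavior (not neutron's exact code):

    def get_value_from_file(path, converter=None):
        # Missing pidfile => no running proxy; log and return None.
        try:
            with open(path) as f:
                return converter(f.read()) if converter else f.read()
        except (OSError, IOError) as err:
            print(f'Unable to access {path}; Error: {err}')
            return None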
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.106 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf8d693-dfad-4cab-8deb-feb4d6c3f9ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.107 164791 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: global
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     log         /dev/log local0 debug
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     log-tag     haproxy-metadata-proxy-1da31a90-4851-4e23-b49c-d37e40c75813
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     user        root
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     group       root
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     maxconn     1024
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     pidfile     /var/lib/neutron/external/pids/1da31a90-4851-4e23-b49c-d37e40c75813.pid.haproxy
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     daemon
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: defaults
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     log global
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     mode http
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     option httplog
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     option dontlognull
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     option http-server-close
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     option forwardfor
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     retries                 3
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     timeout http-request    30s
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     timeout connect         30s
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     timeout client          32s
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     timeout server          32s
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     timeout http-keep-alive 30s
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: listen listener
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     bind 169.254.169.254:80
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:     http-request add-header X-OVN-Network-ID 1da31a90-4851-4e23-b49c-d37e40c75813
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:56:29 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:29.107 164791 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813', 'env', 'PROCESS_TAG=haproxy-1da31a90-4851-4e23-b49c-d37e40c75813', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1da31a90-4851-4e23-b49c-d37e40c75813.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
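Outside the agent, the same spawn can be reproduced with a plain subprocess call; in production it goes through neutron-rootwrap, as above, for privilege escalation. A hypothetical standalone sketch (run as root; all names and paths copied from the log line):

    import subprocess

    subprocess.run(
        ['ip', 'netns', 'exec',
         'ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813',
         'env', 'PROCESS_TAG=haproxy-1da31a90-4851-4e23-b49c-d37e40c75813',
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/1da31a90-4851-4e23-b49c-d37e40c75813.conf'],
        check=True)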
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.120 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.166 253516 DEBUG nova.compute.manager [req-28d1caf7-db93-428c-af55-8884f82a6201 req-5d1cb476-4a55-4aa9-9e2c-5bd38c603c7e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-plugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.166 253516 DEBUG oslo_concurrency.lockutils [req-28d1caf7-db93-428c-af55-8884f82a6201 req-5d1cb476-4a55-4aa9-9e2c-5bd38c603c7e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.166 253516 DEBUG oslo_concurrency.lockutils [req-28d1caf7-db93-428c-af55-8884f82a6201 req-5d1cb476-4a55-4aa9-9e2c-5bd38c603c7e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.166 253516 DEBUG oslo_concurrency.lockutils [req-28d1caf7-db93-428c-af55-8884f82a6201 req-5d1cb476-4a55-4aa9-9e2c-5bd38c603c7e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.167 253516 DEBUG nova.compute.manager [req-28d1caf7-db93-428c-af55-8884f82a6201 req-5d1cb476-4a55-4aa9-9e2c-5bd38c603c7e c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Processing event network-vif-plugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:56:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:29 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.313 253516 DEBUG nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.314 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064589.31321, e414c01f-d327-411b-9309-c4c4dabd5b4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.314 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] VM Started (Lifecycle Event)
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.318 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.320 253516 INFO nova.virt.libvirt.driver [-] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Instance spawned successfully.
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.320 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.335 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.339 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.342 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.342 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.342 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.343 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.343 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.344 253516 DEBUG nova.virt.libvirt.driver [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.364 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] During sync_power_state the instance has a pending task (spawning). Skip.
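The mismatch logged above (DB power_state 0 vs VM power_state 1) is expected mid-spawn: in nova.compute.power_state, 0 is NOSTATE and 1 is RUNNING, and the sync defers whenever a task is still in flight. A rough sketch of that decision, not nova's exact code:

    # nova.compute.power_state constants: 0 == NOSTATE, 1 == RUNNING.
    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # 'spawning' here: trust the in-flight task, skip the sync.
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'   # reconcile the DB with the hypervisor
        return 'in-sync'

    print(sync_power_state(NOSTATE, RUNNING, 'spawning'))  # -> 'skip'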
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.364 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064589.3132882, e414c01f-d327-411b-9309-c4c4dabd5b4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.364 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] VM Paused (Lifecycle Event)
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.385 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.386 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064589.317327, e414c01f-d327-411b-9309-c4c4dabd5b4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.387 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] VM Resumed (Lifecycle Event)
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.403 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:56:29 compute-0 podman[264347]: 2025-11-25 09:56:29.404139709 +0000 UTC m=+0.032369660 container create 67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.409 253516 INFO nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Took 4.27 seconds to spawn the instance on the hypervisor.
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.409 253516 DEBUG nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.410 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.430 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:56:29 compute-0 systemd[1]: Started libpod-conmon-67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7.scope.
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.457 253516 INFO nova.compute.manager [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Took 4.91 seconds to build instance.
Nov 25 09:56:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:56:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2027286fa675275eb4f949d66488a7381f6c7906a3492c20208baab7bd42706c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.466 253516 DEBUG oslo_concurrency.lockutils [None req-26c4a47a-4a7c-48c0-a8aa-03f1cf0917b0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:29 compute-0 podman[264347]: 2025-11-25 09:56:29.47022814 +0000 UTC m=+0.098458102 container init 67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 09:56:29 compute-0 podman[264347]: 2025-11-25 09:56:29.476045496 +0000 UTC m=+0.104275438 container start 67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 09:56:29 compute-0 podman[264347]: 2025-11-25 09:56:29.389658721 +0000 UTC m=+0.017888682 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:56:29 compute-0 neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813[264358]: [NOTICE]   (264362) : New worker (264364) forked
Nov 25 09:56:29 compute-0 neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813[264358]: [NOTICE]   (264362) : Loading success.
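The podman events (image pull, container create, init, start) plus the two haproxy NOTICEs show the proxy coming up inside the neutron-haproxy-ovnmeta-... container. A quick, hypothetical way to confirm its state afterwards from Python, assuming podman is on PATH:

    import json
    import subprocess

    name = 'neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813'
    out = subprocess.run(['podman', 'inspect', name],
                         capture_output=True, text=True, check=True).stdout
    info = json.loads(out)[0]
    # State.Status should read 'running' once the start event has fired.
    print(info['State']['Status'], info['ImageName'])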
Nov 25 09:56:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:56:29 compute-0 nova_compute[253512]: 2025-11-25 09:56:29.699 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:29 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:56:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:29.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:30] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Nov 25 09:56:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:30] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Nov 25 09:56:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:30 compute-0 ceph-mon[74207]: pgmap v758: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:56:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:30 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:31 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:31 compute-0 nova_compute[253512]: 2025-11-25 09:56:31.245 253516 DEBUG nova.compute.manager [req-bdd5d70f-338c-4a3f-a09f-59102c23b925 req-2a1b203f-ab2b-4aeb-9bb2-ebccfb3b0ca8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-plugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:31 compute-0 nova_compute[253512]: 2025-11-25 09:56:31.246 253516 DEBUG oslo_concurrency.lockutils [req-bdd5d70f-338c-4a3f-a09f-59102c23b925 req-2a1b203f-ab2b-4aeb-9bb2-ebccfb3b0ca8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:31 compute-0 nova_compute[253512]: 2025-11-25 09:56:31.246 253516 DEBUG oslo_concurrency.lockutils [req-bdd5d70f-338c-4a3f-a09f-59102c23b925 req-2a1b203f-ab2b-4aeb-9bb2-ebccfb3b0ca8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:31 compute-0 nova_compute[253512]: 2025-11-25 09:56:31.246 253516 DEBUG oslo_concurrency.lockutils [req-bdd5d70f-338c-4a3f-a09f-59102c23b925 req-2a1b203f-ab2b-4aeb-9bb2-ebccfb3b0ca8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:31 compute-0 nova_compute[253512]: 2025-11-25 09:56:31.247 253516 DEBUG nova.compute.manager [req-bdd5d70f-338c-4a3f-a09f-59102c23b925 req-2a1b203f-ab2b-4aeb-9bb2-ebccfb3b0ca8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] No waiting events found dispatching network-vif-plugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:56:31 compute-0 nova_compute[253512]: 2025-11-25 09:56:31.247 253516 WARNING nova.compute.manager [req-bdd5d70f-338c-4a3f-a09f-59102c23b925 req-2a1b203f-ab2b-4aeb-9bb2-ebccfb3b0ca8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received unexpected event network-vif-plugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 for instance with vm_state active and task_state None.
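This WARNING is harmless: the first network-vif-plugged delivery at 09:56:29 consumed the registered waiter and the build completed, so the repeat delivery at 09:56:31 finds nothing to pop. A hypothetical simplification of the pop-or-warn pattern the lock lines above protect (the real implementation is nova.compute.manager.InstanceEvents):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}    # {instance_uuid: {event_name: waiter}}
            self._lock = threading.Lock()   # the "...-events" lock above

        def pop_instance_event(self, instance_uuid, name):
            with self._lock:
                waiter = self._events.get(instance_uuid, {}).pop(name, None)
            if waiter is None:
                print(f'Received unexpected event {name}')  # the WARNING
            return waiter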
Nov 25 09:56:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:31 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec0060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:31.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:32.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:32 compute-0 ceph-mon[74207]: pgmap v759: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:32 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec0060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:33 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:33 compute-0 nova_compute[253512]: 2025-11-25 09:56:33.220 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:33 compute-0 nova_compute[253512]: 2025-11-25 09:56:33.250 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:33 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:33.251 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:56:33 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:33.253 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:56:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:33 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:34.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:34 compute-0 nova_compute[253512]: 2025-11-25 09:56:34.699 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:34 compute-0 ceph-mon[74207]: pgmap v760: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:34 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec0060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:35 compute-0 podman[264375]: 2025-11-25 09:56:35.000327599 +0000 UTC m=+0.064948372 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 09:56:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:35 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec0060f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:35 compute-0 NetworkManager[48903]: <info>  [1764064595.3198] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 25 09:56:35 compute-0 NetworkManager[48903]: <info>  [1764064595.3204] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 25 09:56:35 compute-0 nova_compute[253512]: 2025-11-25 09:56:35.319 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:35 compute-0 ovn_controller[155020]: 2025-11-25T09:56:35Z|00054|binding|INFO|Releasing lport 1198a2e0-5a95-4f4d-8225-c7b2e30ebbe1 from this chassis (sb_readonly=0)
Nov 25 09:56:35 compute-0 ovn_controller[155020]: 2025-11-25T09:56:35Z|00055|binding|INFO|Releasing lport 1198a2e0-5a95-4f4d-8225-c7b2e30ebbe1 from this chassis (sb_readonly=0)
Nov 25 09:56:35 compute-0 nova_compute[253512]: 2025-11-25 09:56:35.363 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:35 compute-0 nova_compute[253512]: 2025-11-25 09:56:35.366 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:35 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:36 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:36.255 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
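This DbSetCommand is the delayed follow-up to the SbGlobalUpdateEvent at 09:56:33: after the announced 3-second wait, the agent acknowledges the new nb_cfg (7) by writing it into its Chassis_Private external_ids, which is how neutron-server sees the agent as alive. A sketch of that ack step, assuming an ovsdbapp-style southbound API handle (sb_idl) and assuming the delay is randomized to spread writes across chassis:

    import random
    import time

    def ack_sb_cfg(sb_idl, chassis_private_uuid, nb_cfg):
        # Spread the writes out; the log shows a 3 second delay here.
        time.sleep(random.randint(0, 10))
        sb_idl.db_set(
            'Chassis_Private', chassis_private_uuid,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)})
        ).execute(check_error=True)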
Nov 25 09:56:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:36 compute-0 nova_compute[253512]: 2025-11-25 09:56:36.345 253516 DEBUG nova.compute.manager [req-a7331c77-de9c-4632-a96a-267d134512f3 req-d486a624-574f-483b-8e5e-f1660233ee13 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-changed-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:36 compute-0 nova_compute[253512]: 2025-11-25 09:56:36.345 253516 DEBUG nova.compute.manager [req-a7331c77-de9c-4632-a96a-267d134512f3 req-d486a624-574f-483b-8e5e-f1660233ee13 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing instance network info cache due to event network-changed-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:56:36 compute-0 nova_compute[253512]: 2025-11-25 09:56:36.345 253516 DEBUG oslo_concurrency.lockutils [req-a7331c77-de9c-4632-a96a-267d134512f3 req-d486a624-574f-483b-8e5e-f1660233ee13 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:56:36 compute-0 nova_compute[253512]: 2025-11-25 09:56:36.346 253516 DEBUG oslo_concurrency.lockutils [req-a7331c77-de9c-4632-a96a-267d134512f3 req-d486a624-574f-483b-8e5e-f1660233ee13 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:56:36 compute-0 nova_compute[253512]: 2025-11-25 09:56:36.346 253516 DEBUG nova.network.neutron [req-a7331c77-de9c-4632-a96a-267d134512f3 req-d486a624-574f-483b-8e5e-f1660233ee13 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing network info cache for port 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:56:36 compute-0 ceph-mon[74207]: pgmap v761: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:36 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55af23514f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:37.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:37.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:37.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:37.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
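All four alertmanager failures above reduce to the same DNS problem: the resolver at 192.168.122.80 cannot resolve the np000553469*.shiftstack dashboard hosts. The failing lookup is easy to reproduce from this node (hostname copied from the log; expect the same "no such host" as a gaierror):

    import socket

    try:
        socket.getaddrinfo('np0005534695.shiftstack', 8443)
    except socket.gaierror as err:
        print('lookup failed:', err)   # mirrors "no such host" above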
Nov 25 09:56:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:37 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec006ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:37 compute-0 nova_compute[253512]: 2025-11-25 09:56:37.337 253516 DEBUG nova.network.neutron [req-a7331c77-de9c-4632-a96a-267d134512f3 req-d486a624-574f-483b-8e5e-f1660233ee13 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updated VIF entry in instance network info cache for port 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:56:37 compute-0 nova_compute[253512]: 2025-11-25 09:56:37.337 253516 DEBUG nova.network.neutron [req-a7331c77-de9c-4632-a96a-267d134512f3 req-d486a624-574f-483b-8e5e-f1660233ee13 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
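The cache entry being written above is a JSON list of VIFs; the guest's fixed address (10.100.0.6) and its floating IP (192.168.122.237) sit two levels down. A hypothetical helper that walks exactly the structure shown:

    def addresses(vif):
        # network -> subnets -> ips -> (fixed address, floating addresses)
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                yield (ip['address'],
                       [f['address'] for f in ip.get('floating_ips', [])])

    # With the entry above this yields: ('10.100.0.6', ['192.168.122.237'])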
Nov 25 09:56:37 compute-0 nova_compute[253512]: 2025-11-25 09:56:37.353 253516 DEBUG oslo_concurrency.lockutils [req-a7331c77-de9c-4632-a96a-267d134512f3 req-d486a624-574f-483b-8e5e-f1660233ee13 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:56:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v762: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:37 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec006ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 09:56:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:37.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 09:56:38 compute-0 nova_compute[253512]: 2025-11-25 09:56:38.222 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:38.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:38 compute-0 ceph-mon[74207]: pgmap v762: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:56:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:38 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:39 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:56:39 compute-0 nova_compute[253512]: 2025-11-25 09:56:39.701 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:39 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec006ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:40.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:40] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Nov 25 09:56:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:40] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Nov 25 09:56:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:40.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:40 compute-0 ceph-mon[74207]: pgmap v763: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 09:56:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:40 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec006ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:41 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:41 compute-0 ovn_controller[155020]: 2025-11-25T09:56:41Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:f5:2a 10.100.0.6
Nov 25 09:56:41 compute-0 ovn_controller[155020]: 2025-11-25T09:56:41Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:f5:2a 10.100.0.6
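The DHCPOFFER/DHCPACK pair above comes from OVN's native DHCP responder (the pinctrl thread inside ovn-controller); no dnsmasq process is involved. A small parser for these pinctrl lines, with the format inferred from the two entries above:

    # Parse ovn-controller pinctrl DHCP lines like the two above.
    import re

    PINCTRL = re.compile(
        r"\|pinctrl\(ovn_pinctrl0\)\|INFO\|(?P<msg>DHCPOFFER|DHCPACK) "
        r"(?P<mac>[0-9a-f:]{17}) (?P<ip>\d+\.\d+\.\d+\.\d+)")

    line = ("2025-11-25T09:56:41Z|00009|pinctrl(ovn_pinctrl0)|INFO|"
            "DHCPACK fa:16:3e:03:f5:2a 10.100.0.6")
    m = PINCTRL.search(line)
    print(m.group("msg"), m.group("mac"), m.group("ip"))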
Nov 25 09:56:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 25 09:56:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:41 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:42.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:42 compute-0 sudo[264404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:56:42 compute-0 sudo[264404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:56:42 compute-0 sudo[264404]: pam_unix(sudo:session): session closed for user root
Nov 25 09:56:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:42.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:42 compute-0 ceph-mon[74207]: pgmap v764: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 25 09:56:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:42 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec006ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
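The recurring _set_new_cache_sizes entries record the monitor's cache autotuner splitting its budget across incremental-map, full-map, and RocksDB allocations. A quick check that the three allocations in the entry above fit inside cache_size:

    # Numbers copied from the _set_new_cache_sizes entry above.
    cache_size = 1020054731   # ~0.95 GiB budget
    inc_alloc = 343932928     # 328 MiB
    full_alloc = 348127232    # 332 MiB
    kv_alloc = 318767104     # 304 MiB

    total = inc_alloc + full_alloc + kv_alloc
    assert total <= cache_size
    print(cache_size - total)  # 9227467 bytes (~8.8 MiB) of headroom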
Nov 25 09:56:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:43 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec006ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:43 compute-0 nova_compute[253512]: 2025-11-25 09:56:43.224 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:43 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb70c002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:44.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:44.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:44 compute-0 nova_compute[253512]: 2025-11-25 09:56:44.702 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:44 compute-0 ceph-mon[74207]: pgmap v765: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:44 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:56:44
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['backups', '.rgw.root', '.mgr', '.nfs', 'images', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
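The balancer run above (mode upmap, max misplaced 5%) evaluated twelve pools and prepared no upmap changes, i.e. placement is already balanced. A sketch of how the same state could be queried, assuming a ceph CLI and an admin keyring on the host:

    # Query the balancer that produced the entries above (assumed CLI access).
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))  # e.g. upmap True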
Nov 25 09:56:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:56:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:56:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
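The rbd_support module above is periodically reloading per-pool mirror-snapshot and trash-purge schedules for the vms, volumes, backups, and images pools. A sketch listing the mirror-snapshot schedules from the CLI (the trash-purge analog is `rbd trash purge schedule ls`):

    # List the schedules the rbd_support handlers above keep reloading;
    # assumes the rbd CLI is available on the host.
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive"],
            check=False)  # pools without schedules simply print nothing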
Nov 25 09:56:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:45 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:45 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:56:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:46.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:46.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:46 compute-0 nova_compute[253512]: 2025-11-25 09:56:46.641 253516 INFO nova.compute.manager [None req-b2992243-396f-4208-b3bc-23925e3ea243 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Get console output
Nov 25 09:56:46 compute-0 nova_compute[253512]: 2025-11-25 09:56:46.645 259829 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
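The INFO line above is nova tolerating a TypeError raised while draining the instance console pty: the read returned None, which was then concatenated onto a bytes buffer. The underlying error in isolation:

    # The exact TypeError nova logs and ignores above.
    buf = b"console output so far"
    chunk = None  # what a drained pty read can hand back
    try:
        buf + chunk
    except TypeError as exc:
        print(exc)  # can't concat NoneType to bytes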
Nov 25 09:56:46 compute-0 ceph-mon[74207]: pgmap v766: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:46 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb70c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:47.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:47.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:47.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:47.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
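All three ceph-dashboard webhook receivers above fail the same way: the np000553469{4,5,6}.shiftstack names do not resolve against the DNS server at 192.168.122.80, so alertmanager exhausts its retries and starts over. The failing lookup is easy to reproduce:

    # Reproduce the "no such host" failure from the alertmanager entries above.
    import socket

    try:
        socket.getaddrinfo("np0005534694.shiftstack", 8443)
    except socket.gaierror as exc:
        print("resolution failed:", exc)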
Nov 25 09:56:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:47 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:47 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:48.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:48 compute-0 nova_compute[253512]: 2025-11-25 09:56:48.226 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:48.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:48 compute-0 ceph-mon[74207]: pgmap v767: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:48 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:49 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb70c003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:49 compute-0 nova_compute[253512]: 2025-11-25 09:56:49.349 253516 DEBUG oslo_concurrency.lockutils [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "interface-e414c01f-d327-411b-9309-c4c4dabd5b4a-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:49 compute-0 nova_compute[253512]: 2025-11-25 09:56:49.349 253516 DEBUG oslo_concurrency.lockutils [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "interface-e414c01f-d327-411b-9309-c4c4dabd5b4a-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:49 compute-0 nova_compute[253512]: 2025-11-25 09:56:49.349 253516 DEBUG nova.objects.instance [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'flavor' on Instance uuid e414c01f-d327-411b-9309-c4c4dabd5b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:56:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:49 compute-0 nova_compute[253512]: 2025-11-25 09:56:49.601 253516 DEBUG nova.objects.instance [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_requests' on Instance uuid e414c01f-d327-411b-9309-c4c4dabd5b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:56:49 compute-0 nova_compute[253512]: 2025-11-25 09:56:49.610 253516 DEBUG nova.network.neutron [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:56:49 compute-0 nova_compute[253512]: 2025-11-25 09:56:49.703 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:49 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:49 compute-0 nova_compute[253512]: 2025-11-25 09:56:49.771 253516 DEBUG nova.policy [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
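The policy check above fails because the caller holds only the member and reader roles, while network:attach_external_network conventionally requires admin; nova then proceeds with non-external networks. A minimal oslo.policy sketch of that check; the rule string "is_admin:True" is an assumption about the default, not read from this deployment:

    # Minimal oslo.policy sketch of the failed check above; the rule
    # string is an assumed default.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "is_admin:True"))

    creds = {"roles": ["member", "reader"], "is_admin": False}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # -> False, matching the "Policy check ... failed" entry above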
Nov 25 09:56:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:50.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:50] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Nov 25 09:56:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:56:50] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Nov 25 09:56:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:50.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:50 compute-0 nova_compute[253512]: 2025-11-25 09:56:50.500 253516 DEBUG nova.network.neutron [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Successfully created port: b3599bd2-09f9-4143-abc8-745915f961e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:56:50 compute-0 ceph-mon[74207]: pgmap v768: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:50 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:51 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:51 compute-0 nova_compute[253512]: 2025-11-25 09:56:51.205 253516 DEBUG nova.network.neutron [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Successfully updated port: b3599bd2-09f9-4143-abc8-745915f961e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:56:51 compute-0 nova_compute[253512]: 2025-11-25 09:56:51.218 253516 DEBUG oslo_concurrency.lockutils [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:56:51 compute-0 nova_compute[253512]: 2025-11-25 09:56:51.219 253516 DEBUG oslo_concurrency.lockutils [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:56:51 compute-0 nova_compute[253512]: 2025-11-25 09:56:51.219 253516 DEBUG nova.network.neutron [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:56:51 compute-0 nova_compute[253512]: 2025-11-25 09:56:51.285 253516 DEBUG nova.compute.manager [req-2b33a6f2-f944-4630-af5d-028a3095c4c9 req-3087ba7c-a1f3-43fb-83cf-33f1e0e89a3a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-changed-b3599bd2-09f9-4143-abc8-745915f961e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:51 compute-0 nova_compute[253512]: 2025-11-25 09:56:51.286 253516 DEBUG nova.compute.manager [req-2b33a6f2-f944-4630-af5d-028a3095c4c9 req-3087ba7c-a1f3-43fb-83cf-33f1e0e89a3a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing instance network info cache due to event network-changed-b3599bd2-09f9-4143-abc8-745915f961e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:56:51 compute-0 nova_compute[253512]: 2025-11-25 09:56:51.286 253516 DEBUG oslo_concurrency.lockutils [req-2b33a6f2-f944-4630-af5d-028a3095c4c9 req-3087ba7c-a1f3-43fb-83cf-33f1e0e89a3a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:56:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:51 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:52 compute-0 podman[264440]: 2025-11-25 09:56:52.007521024 +0000 UTC m=+0.066847844 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 09:56:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:52.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.671 253516 DEBUG nova.network.neutron [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.688 253516 DEBUG oslo_concurrency.lockutils [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.689 253516 DEBUG oslo_concurrency.lockutils [req-2b33a6f2-f944-4630-af5d-028a3095c4c9 req-3087ba7c-a1f3-43fb-83cf-33f1e0e89a3a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.689 253516 DEBUG nova.network.neutron [req-2b33a6f2-f944-4630-af5d-028a3095c4c9 req-3087ba7c-a1f3-43fb-83cf-33f1e0e89a3a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing network info cache for port b3599bd2-09f9-4143-abc8-745915f961e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.691 253516 DEBUG nova.virt.libvirt.vif [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2009582697',display_name='tempest-TestNetworkBasicOps-server-2009582697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2009582697',id=6,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD21EiYZKhXbIpyaNEUjP1ulP9c0zDwkxr0Xxe9kxy5T7Kh/aZqrRNdEYeVYyDq7wYIqSwgggji3NCoHXpcuxZfFxnprvDIJCcOEcX/dIdfv+vRs+aEB3wFMQZGt8WdE2g==',key_name='tempest-TestNetworkBasicOps-1281314821',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:56:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-36v3wqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:56:29Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=e414c01f-d327-411b-9309-c4c4dabd5b4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.692 253516 DEBUG nova.network.os_vif_util [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.692 253516 DEBUG nova.network.os_vif_util [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=b3599bd2-09f9-4143-abc8-745915f961e3,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3599bd2-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.693 253516 DEBUG os_vif [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=b3599bd2-09f9-4143-abc8-745915f961e3,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3599bd2-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.693 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.693 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.694 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.696 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.697 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3599bd2-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.697 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3599bd2-09, col_values=(('external_ids', {'iface-id': 'b3599bd2-09f9-4143-abc8-745915f961e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:3a:8c', 'vm-uuid': 'e414c01f-d327-411b-9309-c4c4dabd5b4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
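The ovsdbapp transaction above is the os-vif OVS plug: ensure br-int exists, add the tap port, and tag the Interface row with the Neutron port's external_ids so ovn-controller can claim it. The same three steps expressed as ovs-vsctl calls (an equivalent sketch, not nova's actual code path):

    # ovs-vsctl equivalent of the AddBridgeCommand / AddPortCommand /
    # DbSetCommand transaction above; values copied from the log.
    import subprocess

    def vsctl(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    vsctl("--may-exist", "add-br", "br-int")
    vsctl("--may-exist", "add-port", "br-int", "tapb3599bd2-09")
    vsctl("set", "Interface", "tapb3599bd2-09",
          "external_ids:iface-id=b3599bd2-09f9-4143-abc8-745915f961e3",
          "external_ids:iface-status=active",
          "external_ids:attached-mac=fa:16:3e:40:3a:8c")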
Nov 25 09:56:52 compute-0 NetworkManager[48903]: <info>  [1764064612.6992] manager: (tapb3599bd2-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.704 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.705 253516 INFO os_vif [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=b3599bd2-09f9-4143-abc8-745915f961e3,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3599bd2-09')
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.706 253516 DEBUG nova.virt.libvirt.vif [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2009582697',display_name='tempest-TestNetworkBasicOps-server-2009582697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2009582697',id=6,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD21EiYZKhXbIpyaNEUjP1ulP9c0zDwkxr0Xxe9kxy5T7Kh/aZqrRNdEYeVYyDq7wYIqSwgggji3NCoHXpcuxZfFxnprvDIJCcOEcX/dIdfv+vRs+aEB3wFMQZGt8WdE2g==',key_name='tempest-TestNetworkBasicOps-1281314821',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:56:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-36v3wqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:56:29Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=e414c01f-d327-411b-9309-c4c4dabd5b4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.706 253516 DEBUG nova.network.os_vif_util [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.707 253516 DEBUG nova.network.os_vif_util [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=b3599bd2-09f9-4143-abc8-745915f961e3,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3599bd2-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.709 253516 DEBUG nova.virt.libvirt.guest [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] attach device xml: <interface type="ethernet">
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <mac address="fa:16:3e:40:3a:8c"/>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <model type="virtio"/>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <mtu size="1442"/>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <target dev="tapb3599bd2-09"/>
Nov 25 09:56:52 compute-0 nova_compute[253512]: </interface>
Nov 25 09:56:52 compute-0 nova_compute[253512]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
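nova hands the interface XML above to libvirt for a live hot-plug. A sketch of the corresponding libvirt-python call; the connection URI and flag choice are assumptions, while the XML and instance UUID come from the log:

    # Sketch of the hot-plug behind "attach device xml" above.
    import libvirt

    XML = """<interface type="ethernet">
      <mac address="fa:16:3e:40:3a:8c"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapb3599bd2-09"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")  # URI is an assumption
    dom = conn.lookupByUUIDString("e414c01f-d327-411b-9309-c4c4dabd5b4a")
    dom.attachDeviceFlags(XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)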
Nov 25 09:56:52 compute-0 kernel: tapb3599bd2-09: entered promiscuous mode
Nov 25 09:56:52 compute-0 NetworkManager[48903]: <info>  [1764064612.7172] manager: (tapb3599bd2-09): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.718 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:52 compute-0 ovn_controller[155020]: 2025-11-25T09:56:52Z|00056|binding|INFO|Claiming lport b3599bd2-09f9-4143-abc8-745915f961e3 for this chassis.
Nov 25 09:56:52 compute-0 ovn_controller[155020]: 2025-11-25T09:56:52Z|00057|binding|INFO|b3599bd2-09f9-4143-abc8-745915f961e3: Claiming fa:16:3e:40:3a:8c 10.100.0.24
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.726 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:3a:8c 10.100.0.24'], port_security=['fa:16:3e:40:3a:8c 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'e414c01f-d327-411b-9309-c4c4dabd5b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77421187-f24b-4366-8c59-8fbcf4a8390c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e3e0a9c-90d8-4bb2-a9a5-b8401547fa81, chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=b3599bd2-09f9-4143-abc8-745915f961e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.728 164791 INFO neutron.agent.ovn.metadata.agent [-] Port b3599bd2-09f9-4143-abc8-745915f961e3 in datapath 23a0542a-b85d-40e7-8bd9-6ee0d43b0306 bound to our chassis
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.729 164791 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 23a0542a-b85d-40e7-8bd9-6ee0d43b0306
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.741 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[75ce0665-4f8e-4859-aba5-74b78102d1dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.742 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap23a0542a-b1 in ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.744 258952 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap23a0542a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.744 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[3a038b81-b1e1-420e-acc7-ce77c134d633]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.745 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[567d26f3-65cc-49ab-99cc-9912a5765790]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
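Above, the OVN metadata agent provisions the ovnmeta- namespace for the newly bound datapath: it checks for an existing tap23a0542a-b0 device, then creates a veth pair with one end (tap23a0542a-b1) inside the namespace. The equivalent ip(8) steps, sketched via subprocess; the agent itself does this through pyroute2 behind privsep, and this must run as root:

    # Equivalent of the namespace/veth provisioning logged above;
    # interface and namespace names are from the log.
    import subprocess

    ns = "ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306"
    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", "tap23a0542a-b0", "type", "veth",
                    "peer", "name", "tap23a0542a-b1"], check=True)
    subprocess.run(["ip", "link", "set", "tap23a0542a-b1", "netns", ns],
                   check=True)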
Nov 25 09:56:52 compute-0 systemd-udevd[264463]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.756 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[6a466399-334b-4acf-b0cf-f946b60d2c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 NetworkManager[48903]: <info>  [1764064612.7585] device (tapb3599bd2-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:56:52 compute-0 NetworkManager[48903]: <info>  [1764064612.7594] device (tapb3599bd2-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:56:52 compute-0 ceph-mon[74207]: pgmap v769: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.778 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[d38452ca-8146-425e-88f4-cb7619ffe8aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.782 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:52 compute-0 ovn_controller[155020]: 2025-11-25T09:56:52Z|00058|binding|INFO|Setting lport b3599bd2-09f9-4143-abc8-745915f961e3 ovn-installed in OVS
Nov 25 09:56:52 compute-0 ovn_controller[155020]: 2025-11-25T09:56:52Z|00059|binding|INFO|Setting lport b3599bd2-09f9-4143-abc8-745915f961e3 up in Southbound
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.785 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.790 253516 DEBUG nova.virt.libvirt.driver [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.791 253516 DEBUG nova.virt.libvirt.driver [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.791 253516 DEBUG nova.virt.libvirt.driver [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:03:f5:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.791 253516 DEBUG nova.virt.libvirt.driver [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:40:3a:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.800 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d14fd9-5b96-422e-ab88-043a04dd1c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 systemd-udevd[264466]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:56:52 compute-0 NetworkManager[48903]: <info>  [1764064612.8052] manager: (tap23a0542a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.805 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[7d922b7b-13ad-440c-9135-a3b830b4c950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.809 253516 DEBUG nova.virt.libvirt.guest [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <nova:name>tempest-TestNetworkBasicOps-server-2009582697</nova:name>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <nova:creationTime>2025-11-25 09:56:52</nova:creationTime>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <nova:flavor name="m1.nano">
Nov 25 09:56:52 compute-0 nova_compute[253512]:     <nova:memory>128</nova:memory>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     <nova:disk>1</nova:disk>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     <nova:swap>0</nova:swap>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     <nova:vcpus>1</nova:vcpus>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   </nova:flavor>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <nova:owner>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   </nova:owner>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   <nova:ports>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     <nova:port uuid="7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82">
Nov 25 09:56:52 compute-0 nova_compute[253512]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     </nova:port>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     <nova:port uuid="b3599bd2-09f9-4143-abc8-745915f961e3">
Nov 25 09:56:52 compute-0 nova_compute[253512]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 25 09:56:52 compute-0 nova_compute[253512]:     </nova:port>
Nov 25 09:56:52 compute-0 nova_compute[253512]:   </nova:ports>
Nov 25 09:56:52 compute-0 nova_compute[253512]: </nova:instance>
Nov 25 09:56:52 compute-0 nova_compute[253512]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
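[note] The <nova:instance> block above is namespaced XML that nova attaches to the libvirt domain. A hedged sketch reproducing its shape with the standard library (nova builds this through its own config classes, not this helper; only a subset of the fields is shown):

    import xml.etree.ElementTree as ET

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"
    ET.register_namespace("nova", NOVA_NS)

    def q(tag):  # qualify a tag with the nova namespace
        return f"{{{NOVA_NS}}}{tag}"

    instance = ET.Element(q("instance"))
    ET.SubElement(instance, q("name")).text = "tempest-TestNetworkBasicOps-server-2009582697"
    flavor = ET.SubElement(instance, q("flavor"), name="m1.nano")
    ET.SubElement(flavor, q("memory")).text = "128"
    ET.SubElement(flavor, q("vcpus")).text = "1"
    ports = ET.SubElement(instance, q("ports"))
    port = ET.SubElement(ports, q("port"), uuid="b3599bd2-09f9-4143-abc8-745915f961e3")
    ET.SubElement(port, q("ip"), type="fixed", address="10.100.0.24", ipVersion="4")

    print(ET.tostring(instance, encoding="unicode"))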
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.830 253516 DEBUG oslo_concurrency.lockutils [None req-8535f0c4-102f-40b9-838a-1569a8a00b88 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "interface-e414c01f-d327-411b-9309-c4c4dabd5b4a-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 3.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
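[note] The lock line above shows attach_interface serialized on a per-interface oslo.concurrency lock (held 3.481s for the whole plug). A minimal sketch of that pattern; the lock name mirrors the log and the body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("interface-e414c01f-d327-411b-9309-c4c4dabd5b4a-None")
    def do_attach_interface():
        # plug the VIF, update the domain XML, save the device metadata ...
        pass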
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.833 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a1f829-0255-421d-b98e-0430ddb32afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.836 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[39f49046-8c03-4d7b-b2c1-e4d28dfa4004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 NetworkManager[48903]: <info>  [1764064612.8537] device (tap23a0542a-b0): carrier: link connected
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.857 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fe0b6e-5bbb-4223-b256-f2a2ea98ec85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.869 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2e4c4d-6c68-4695-9739-e692a9360795]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23a0542a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:61:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 339343, 'reachable_time': 28392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264481, 'error': None, 'target': 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:52 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.881 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4b3d56-5d23-4435-9786-6bce356dde09]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:61b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 339343, 'tstamp': 339343}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264482, 'error': None, 'target': 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.895 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[91c7db34-fb74-40c7-ba01-7d36e63c5e20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23a0542a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:61:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 339343, 'reachable_time': 28392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264483, 'error': None, 'target': 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
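[note] The two large replies above are RTM_NEWLINK dumps for the tap23a0542a-b1 veth inside the ovnmeta- namespace, serialized back through privsep. A short sketch, assuming pyroute2, of fetching the same attributes directly (names copied from the log; this is not the agent's code):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),     # e.g. tap23a0542a-b1
                  link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:bf:61:b2
                  link.get_attr('IFLA_OPERSTATE'))  # UP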
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.915 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[afc9b605-c754-4001-9efb-7996b3a9d0bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.952 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[b66b9124-d77c-4ddf-b5c4-030279f5ec2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.953 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23a0542a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.953 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.954 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23a0542a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:56:52 compute-0 NetworkManager[48903]: <info>  [1764064612.9559] manager: (tap23a0542a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.956 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:52 compute-0 kernel: tap23a0542a-b0: entered promiscuous mode
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.960 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.960 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap23a0542a-b0, col_values=(('external_ids', {'iface-id': '6cdb5dbb-946e-4292-9f33-2e4e1c3771ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
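[note] The three ovsdbapp transactions above move the metadata tap port into place: delete it from br-ex if present, add it to br-int, and set external_ids:iface-id so ovn-controller can bind it. A hedged sketch of the same sequence through ovsdbapp; the OVSDB socket path is an assumption, while the command names and arguments mirror the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed local socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap23a0542a-b0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap23a0542a-b0', may_exist=True))
        txn.add(api.db_set('Interface', 'tap23a0542a-b0',
                           ('external_ids',
                            {'iface-id': '6cdb5dbb-946e-4292-9f33-2e4e1c3771ee'})))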
Nov 25 09:56:52 compute-0 ovn_controller[155020]: 2025-11-25T09:56:52Z|00060|binding|INFO|Releasing lport 6cdb5dbb-946e-4292-9f33-2e4e1c3771ee from this chassis (sb_readonly=0)
Nov 25 09:56:52 compute-0 nova_compute[253512]: 2025-11-25 09:56:52.977 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.978 164791 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/23a0542a-b85d-40e7-8bd9-6ee0d43b0306.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/23a0542a-b85d-40e7-8bd9-6ee0d43b0306.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.978 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb0bc4e-8fea-40ed-ab99-be62f4f34e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.979 164791 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: global
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     log         /dev/log local0 debug
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     log-tag     haproxy-metadata-proxy-23a0542a-b85d-40e7-8bd9-6ee0d43b0306
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     user        root
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     group       root
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     maxconn     1024
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     pidfile     /var/lib/neutron/external/pids/23a0542a-b85d-40e7-8bd9-6ee0d43b0306.pid.haproxy
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     daemon
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: defaults
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     log global
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     mode http
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     option httplog
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     option dontlognull
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     option http-server-close
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     option forwardfor
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     retries                 3
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     timeout http-request    30s
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     timeout connect         30s
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     timeout client          32s
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     timeout server          32s
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     timeout http-keep-alive 30s
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: listen listener
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     bind 169.254.169.254:80
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:     http-request add-header X-OVN-Network-ID 23a0542a-b85d-40e7-8bd9-6ee0d43b0306
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:56:52 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:56:52.981 164791 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'env', 'PROCESS_TAG=haproxy-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/23a0542a-b85d-40e7-8bd9-6ee0d43b0306.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
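[note] Taken together, the two steps above render the per-network haproxy config and launch haproxy inside the ovnmeta- namespace. A minimal sketch of both steps, with the defaults section trimmed and rootwrap replaced by a plain ip netns exec for illustration (requires root; paths mirror the log):

    import subprocess

    network_id = '23a0542a-b85d-40e7-8bd9-6ee0d43b0306'
    cfg_path = f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf'

    cfg = f"""global
        log     /dev/log local0 debug
        log-tag haproxy-metadata-proxy-{network_id}
        maxconn 1024
        pidfile /var/lib/neutron/external/pids/{network_id}.pid.haproxy
        daemon

    defaults
        log global
        mode http
        timeout connect 30s
        timeout client  32s
        timeout server  32s

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {network_id}
    """

    with open(cfg_path, 'w') as f:
        f.write(cfg)

    subprocess.check_call(['ip', 'netns', 'exec', f'ovnmeta-{network_id}',
                           'haproxy', '-f', cfg_path])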
Nov 25 09:56:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:53 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:53 compute-0 podman[264511]: 2025-11-25 09:56:53.264859332 +0000 UTC m=+0.032522217 container create 9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 09:56:53 compute-0 systemd[1]: Started libpod-conmon-9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e.scope.
Nov 25 09:56:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7f83b9c37fe676da180aa03b11c32fbf080c4a7d5bf6588d8cec27b6712b6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:56:53 compute-0 podman[264511]: 2025-11-25 09:56:53.32566208 +0000 UTC m=+0.093324965 container init 9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 09:56:53 compute-0 podman[264511]: 2025-11-25 09:56:53.329604061 +0000 UTC m=+0.097266946 container start 9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 09:56:53 compute-0 podman[264511]: 2025-11-25 09:56:53.250343279 +0000 UTC m=+0.018006175 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:56:53 compute-0 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[264523]: [NOTICE]   (264527) : New worker (264529) forked
Nov 25 09:56:53 compute-0 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[264523]: [NOTICE]   (264527) : Loading success.
Nov 25 09:56:53 compute-0 nova_compute[253512]: 2025-11-25 09:56:53.373 253516 DEBUG nova.compute.manager [req-20d6b0e9-f39b-4979-8c8c-7da2c42ab660 req-ca37952a-12f5-4845-9447-fbc1e7e8a489 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:53 compute-0 nova_compute[253512]: 2025-11-25 09:56:53.373 253516 DEBUG oslo_concurrency.lockutils [req-20d6b0e9-f39b-4979-8c8c-7da2c42ab660 req-ca37952a-12f5-4845-9447-fbc1e7e8a489 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:53 compute-0 nova_compute[253512]: 2025-11-25 09:56:53.374 253516 DEBUG oslo_concurrency.lockutils [req-20d6b0e9-f39b-4979-8c8c-7da2c42ab660 req-ca37952a-12f5-4845-9447-fbc1e7e8a489 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:53 compute-0 nova_compute[253512]: 2025-11-25 09:56:53.374 253516 DEBUG oslo_concurrency.lockutils [req-20d6b0e9-f39b-4979-8c8c-7da2c42ab660 req-ca37952a-12f5-4845-9447-fbc1e7e8a489 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:53 compute-0 nova_compute[253512]: 2025-11-25 09:56:53.374 253516 DEBUG nova.compute.manager [req-20d6b0e9-f39b-4979-8c8c-7da2c42ab660 req-ca37952a-12f5-4845-9447-fbc1e7e8a489 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] No waiting events found dispatching network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:56:53 compute-0 nova_compute[253512]: 2025-11-25 09:56:53.375 253516 WARNING nova.compute.manager [req-20d6b0e9-f39b-4979-8c8c-7da2c42ab660 req-ca37952a-12f5-4845-9447-fbc1e7e8a489 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received unexpected event network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3 for instance with vm_state active and task_state None.
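[note] The "No waiting events found ... Received unexpected event" pair above reflects nova's external-event pattern: the compute manager registers expected events per instance, and a notification with no registered waiter is logged as unexpected (here the port was already plugged, so the event is harmless). A simplified, thread-based sketch of that bookkeeping, not nova's actual code:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}               # instance uuid -> {event name -> Event}
            self._lock = threading.Lock()

        def prepare(self, instance, name):
            with self._lock:
                ev = threading.Event()
                self._events.setdefault(instance, {})[name] = ev
                return ev

        def pop(self, instance, name):
            with self._lock:
                return self._events.get(instance, {}).pop(name, None)

    events = InstanceEvents()
    waiter = events.pop('e414c01f-d327-411b-9309-c4c4dabd5b4a',
                        'network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3')
    if waiter is None:
        print('Received unexpected event')   # matches the WARNING above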
Nov 25 09:56:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 0 op/s
Nov 25 09:56:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:53 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:53 compute-0 nova_compute[253512]: 2025-11-25 09:56:53.799 253516 DEBUG nova.network.neutron [req-2b33a6f2-f944-4630-af5d-028a3095c4c9 req-3087ba7c-a1f3-43fb-83cf-33f1e0e89a3a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updated VIF entry in instance network info cache for port b3599bd2-09f9-4143-abc8-745915f961e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:56:53 compute-0 nova_compute[253512]: 2025-11-25 09:56:53.800 253516 DEBUG nova.network.neutron [req-2b33a6f2-f944-4630-af5d-028a3095c4c9 req-3087ba7c-a1f3-43fb-83cf-33f1e0e89a3a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:56:53 compute-0 nova_compute[253512]: 2025-11-25 09:56:53.817 253516 DEBUG oslo_concurrency.lockutils [req-2b33a6f2-f944-4630-af5d-028a3095c4c9 req-3087ba7c-a1f3-43fb-83cf-33f1e0e89a3a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
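[note] The instance_info_cache payload above is a JSON list of VIF dicts. A short sketch of pulling the fixed and floating addresses back out of it; the literal below is trimmed from the logged entry:

    import json

    nw_info = json.loads("""[{"id": "b3599bd2-09f9-4143-abc8-745915f961e3",
        "network": {"subnets": [{"cidr": "10.100.0.16/28",
                                 "ips": [{"address": "10.100.0.24",
                                          "floating_ips": []}]}]}}]""")

    for vif in nw_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"],
                      [f["address"] for f in ip["floating_ips"]])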
Nov 25 09:56:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:54.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:54.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:54 compute-0 ovn_controller[155020]: 2025-11-25T09:56:54Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:3a:8c 10.100.0.24
Nov 25 09:56:54 compute-0 ovn_controller[155020]: 2025-11-25T09:56:54Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:3a:8c 10.100.0.24
Nov 25 09:56:54 compute-0 nova_compute[253512]: 2025-11-25 09:56:54.705 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:54 compute-0 ceph-mon[74207]: pgmap v770: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 0 op/s
Nov 25 09:56:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3382139923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:56:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3382139923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:56:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:54 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:55 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007587643257146578 of space, bias 1.0, pg target 0.22762929771439736 quantized to 32 (current 32)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
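[note] The autoscaler lines above fit a simple relation: pg target is the pool's share of raw space times its bias times a per-root PG budget. A budget of 300 (plausibly 3 OSDs times the default mon_target_pg_per_osd=100, inferred here rather than read from this log) reproduces the logged numbers exactly; tiny results are then quantized up to a power of two no lower than the pool's floor, hence "quantized to 32" (or 16 for the bias-4 metadata pool):

    def pg_target(usage_fraction, bias, pg_budget=300):
        # pg_budget is an inferred constant, not taken from this log
        return usage_fraction * bias * pg_budget

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337 ('.mgr')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 ('cephfs.cephfs.meta')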
Nov 25 09:56:55 compute-0 nova_compute[253512]: 2025-11-25 09:56:55.450 253516 DEBUG nova.compute.manager [req-108dc60c-8b93-49d1-977e-5766a58b57ed req-4ba34132-c426-425b-8e5e-8572c98acfd6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:56:55 compute-0 nova_compute[253512]: 2025-11-25 09:56:55.450 253516 DEBUG oslo_concurrency.lockutils [req-108dc60c-8b93-49d1-977e-5766a58b57ed req-4ba34132-c426-425b-8e5e-8572c98acfd6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:56:55 compute-0 nova_compute[253512]: 2025-11-25 09:56:55.450 253516 DEBUG oslo_concurrency.lockutils [req-108dc60c-8b93-49d1-977e-5766a58b57ed req-4ba34132-c426-425b-8e5e-8572c98acfd6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:56:55 compute-0 nova_compute[253512]: 2025-11-25 09:56:55.451 253516 DEBUG oslo_concurrency.lockutils [req-108dc60c-8b93-49d1-977e-5766a58b57ed req-4ba34132-c426-425b-8e5e-8572c98acfd6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:56:55 compute-0 nova_compute[253512]: 2025-11-25 09:56:55.451 253516 DEBUG nova.compute.manager [req-108dc60c-8b93-49d1-977e-5766a58b57ed req-4ba34132-c426-425b-8e5e-8572c98acfd6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] No waiting events found dispatching network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:56:55 compute-0 nova_compute[253512]: 2025-11-25 09:56:55.451 253516 WARNING nova.compute.manager [req-108dc60c-8b93-49d1-977e-5766a58b57ed req-4ba34132-c426-425b-8e5e-8572c98acfd6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received unexpected event network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3 for instance with vm_state active and task_state None.
Nov 25 09:56:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v771: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 0 op/s
Nov 25 09:56:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:55 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:56.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:56.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:56 compute-0 ceph-mon[74207]: pgmap v771: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 0 op/s
Nov 25 09:56:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:56 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:57.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:57.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:57.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:56:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:56:57.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
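[note] The alertmanager failures above are plain DNS: the np000553469*.shiftstack receiver hostnames do not resolve against 192.168.122.80, so every webhook retry dies in the lookup. A quick stdlib check that reproduces the symptom (hostnames copied from the log):

    import socket

    for host in ('np0005534694.shiftstack',
                 'np0005534695.shiftstack',
                 'np0005534696.shiftstack'):
        try:
            print(host, socket.getaddrinfo(host, 8443))
        except socket.gaierror as exc:
            print(host, '->', exc)   # e.g. [Errno -2] Name or service not known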
Nov 25 09:56:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v772: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:56:57 compute-0 nova_compute[253512]: 2025-11-25 09:56:57.699 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:56:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:56:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:58.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:56:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:56:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:56:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:58.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:56:58 compute-0 ceph-mon[74207]: pgmap v772: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 09:56:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:58 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:58 compute-0 podman[264540]: 2025-11-25 09:56:58.992271829 +0000 UTC m=+0.058126153 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:56:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:59 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Nov 25 09:56:59 compute-0 nova_compute[253512]: 2025-11-25 09:56:59.706 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:56:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:56:59 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:56:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4012642929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:56:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:56:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:00.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:00] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Nov 25 09:57:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:00] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Nov 25 09:57:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:00.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:00 compute-0 ceph-mon[74207]: pgmap v773: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Nov 25 09:57:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:00 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:01 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v774: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:57:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:01 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:02.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:02 compute-0 sudo[264567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:57:02 compute-0 sudo[264567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:02 compute-0 sudo[264567]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:02 compute-0 sudo[264592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:57:02 compute-0 sudo[264592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:02 compute-0 sudo[264617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:57:02 compute-0 sudo[264617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:02 compute-0 sudo[264617]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:02.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:02 compute-0 sudo[264592]: pam_unix(sudo:session): session closed for user root
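[editor's note] The sudo sessions above show cephadm's usual remote-execution pattern: locate python3 with `which`, then run the copied cephadm binary (the path under /var/lib/ceph/<fsid>/ is taken verbatim from the log) with `gather-facts`, which prints host facts as a JSON document on stdout. A rough driver for the same two steps, assuming passwordless sudo for ceph-admin as implied by the log, could be:

    # Hedged sketch of the two-step invocation visible in the sudo entries above.
    # The binary path and timeout come from the log; everything else is assumed.
    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    # Step 1: locate the interpreter, as the orchestrator does.
    python3 = subprocess.run(["sudo", "/bin/which", "python3"],
                             capture_output=True, text=True, check=True).stdout.strip()

    # Step 2: gather-facts with the same timeout seen in the log.
    out = subprocess.run(["sudo", python3, CEPHADM, "--timeout", "895", "gather-facts"],
                         capture_output=True, text=True, check=True).stdout

    facts = json.loads(out)        # gather-facts emits JSON host facts
    print(facts.get("hostname"))   # assumption: the facts include a "hostname" key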
Nov 25 09:57:02 compute-0 nova_compute[253512]: 2025-11-25 09:57:02.700 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:02 compute-0 ceph-mon[74207]: pgmap v774: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:57:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:02 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:03 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:57:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:03 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:03 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3364850300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:57:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 09:57:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 09:57:03 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:04.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:04.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:57:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:57:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:57:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:57:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:57:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:57:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:57:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:57:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:57:04 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:57:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:57:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:57:04 compute-0 sudo[264673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:57:04 compute-0 sudo[264673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:04 compute-0 sudo[264673]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:04 compute-0 sudo[264698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:57:04 compute-0 sudo[264698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:04 compute-0 nova_compute[253512]: 2025-11-25 09:57:04.709 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:04 compute-0 ceph-mon[74207]: pgmap v775: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1248720210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:57:04 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:57:04 compute-0 podman[264755]: 2025-11-25 09:57:04.838811021 +0000 UTC m=+0.028352658 container create 4de70964f98f8cc8a19d74f68683ab5896496e617572e336e578cc6776523f72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_saha, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:57:04 compute-0 systemd[1]: Started libpod-conmon-4de70964f98f8cc8a19d74f68683ab5896496e617572e336e578cc6776523f72.scope.
Nov 25 09:57:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:04 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:57:04 compute-0 podman[264755]: 2025-11-25 09:57:04.901498049 +0000 UTC m=+0.091039687 container init 4de70964f98f8cc8a19d74f68683ab5896496e617572e336e578cc6776523f72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_saha, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 09:57:04 compute-0 podman[264755]: 2025-11-25 09:57:04.906354804 +0000 UTC m=+0.095896442 container start 4de70964f98f8cc8a19d74f68683ab5896496e617572e336e578cc6776523f72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_saha, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:57:04 compute-0 podman[264755]: 2025-11-25 09:57:04.907809788 +0000 UTC m=+0.097351425 container attach 4de70964f98f8cc8a19d74f68683ab5896496e617572e336e578cc6776523f72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_saha, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:57:04 compute-0 happy_saha[264768]: 167 167
Nov 25 09:57:04 compute-0 systemd[1]: libpod-4de70964f98f8cc8a19d74f68683ab5896496e617572e336e578cc6776523f72.scope: Deactivated successfully.
Nov 25 09:57:04 compute-0 podman[264755]: 2025-11-25 09:57:04.910126304 +0000 UTC m=+0.099667942 container died 4de70964f98f8cc8a19d74f68683ab5896496e617572e336e578cc6776523f72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_saha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 09:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec1df373380dd8155164262fccbec2265f94b4860f982565556e9c36d8ae1832-merged.mount: Deactivated successfully.
Nov 25 09:57:04 compute-0 podman[264755]: 2025-11-25 09:57:04.826819715 +0000 UTC m=+0.016361353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:57:04 compute-0 podman[264755]: 2025-11-25 09:57:04.931726012 +0000 UTC m=+0.121267649 container remove 4de70964f98f8cc8a19d74f68683ab5896496e617572e336e578cc6776523f72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 09:57:04 compute-0 systemd[1]: libpod-conmon-4de70964f98f8cc8a19d74f68683ab5896496e617572e336e578cc6776523f72.scope: Deactivated successfully.
Nov 25 09:57:05 compute-0 podman[264791]: 2025-11-25 09:57:05.066571945 +0000 UTC m=+0.029510419 container create f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_gauss, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 09:57:05 compute-0 systemd[1]: Started libpod-conmon-f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332.scope.
Nov 25 09:57:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6c5e4a7dfe796125240a8ed189261a077fd521317697e78a8a5f42cf919f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6c5e4a7dfe796125240a8ed189261a077fd521317697e78a8a5f42cf919f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6c5e4a7dfe796125240a8ed189261a077fd521317697e78a8a5f42cf919f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6c5e4a7dfe796125240a8ed189261a077fd521317697e78a8a5f42cf919f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6c5e4a7dfe796125240a8ed189261a077fd521317697e78a8a5f42cf919f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:05 compute-0 podman[264791]: 2025-11-25 09:57:05.121196106 +0000 UTC m=+0.084134580 container init f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 09:57:05 compute-0 podman[264791]: 2025-11-25 09:57:05.126958919 +0000 UTC m=+0.089897383 container start f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 25 09:57:05 compute-0 podman[264791]: 2025-11-25 09:57:05.128350762 +0000 UTC m=+0.091289226 container attach f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_gauss, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:57:05 compute-0 podman[264801]: 2025-11-25 09:57:05.143577616 +0000 UTC m=+0.053280889 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 09:57:05 compute-0 podman[264791]: 2025-11-25 09:57:05.055379788 +0000 UTC m=+0.018318262 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:57:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:05 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040059b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:05 compute-0 hardcore_gauss[264805]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:57:05 compute-0 hardcore_gauss[264805]: --> All data devices are unavailable
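[editor's note] "passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" means this `ceph-volume lvm batch` run rejected /dev/ceph_vg0/ceph_lv0 as a target. The `lvm list` output a few entries below suggests why that is plausible: the LV already carries ceph lv_tags for osd.1, and an LV already prepared as an OSD is not available to a new batch run. One way to inspect those tags directly, assuming the standard lvm2 tools are on the host, is:

    # Hedged sketch: check whether an LV is already claimed by Ceph via its tags.
    # `lvs --reportformat json` is standard lvm2; VG/LV names come from the log.
    import json
    import subprocess

    report = subprocess.run(
        ["sudo", "lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_tags", "ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True)

    for lv in json.loads(report.stdout)["report"][0]["lv"]:
        # Tags containing ceph.osd_id=... mark the LV as an existing OSD,
        # which a new `ceph-volume lvm batch` treats as unavailable.
        print(lv["lv_name"], "->", lv["lv_tags"])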
Nov 25 09:57:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:57:05.384 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:57:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:57:05.385 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:57:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:57:05.386 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:57:05 compute-0 systemd[1]: libpod-f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332.scope: Deactivated successfully.
Nov 25 09:57:05 compute-0 conmon[264805]: conmon f2d930d292531f09bf17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332.scope/container/memory.events
Nov 25 09:57:05 compute-0 podman[264791]: 2025-11-25 09:57:05.398698186 +0000 UTC m=+0.361636650 container died f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4c6c5e4a7dfe796125240a8ed189261a077fd521317697e78a8a5f42cf919f7-merged.mount: Deactivated successfully.
Nov 25 09:57:05 compute-0 podman[264791]: 2025-11-25 09:57:05.422848091 +0000 UTC m=+0.385786555 container remove f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_gauss, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:57:05 compute-0 systemd[1]: libpod-conmon-f2d930d292531f09bf174fcb5518dea0078908272d9082baef06c0d1fbb55332.scope: Deactivated successfully.
Nov 25 09:57:05 compute-0 sudo[264698]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v776: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:57:05 compute-0 sudo[264847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:57:05 compute-0 sudo[264847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:05 compute-0 sudo[264847]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:05 compute-0 sudo[264872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:57:05 compute-0 sudo[264872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:05 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:05 compute-0 podman[264928]: 2025-11-25 09:57:05.859784556 +0000 UTC m=+0.027924702 container create dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 09:57:05 compute-0 systemd[1]: Started libpod-conmon-dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2.scope.
Nov 25 09:57:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:57:05 compute-0 podman[264928]: 2025-11-25 09:57:05.917964169 +0000 UTC m=+0.086104325 container init dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:57:05 compute-0 podman[264928]: 2025-11-25 09:57:05.922986886 +0000 UTC m=+0.091127033 container start dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 09:57:05 compute-0 wonderful_yalow[264941]: 167 167
Nov 25 09:57:05 compute-0 systemd[1]: libpod-dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2.scope: Deactivated successfully.
Nov 25 09:57:05 compute-0 podman[264928]: 2025-11-25 09:57:05.92437322 +0000 UTC m=+0.092513366 container attach dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_yalow, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 25 09:57:05 compute-0 conmon[264941]: conmon dae362c1cc599291b7de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2.scope/container/memory.events
Nov 25 09:57:05 compute-0 podman[264928]: 2025-11-25 09:57:05.926960577 +0000 UTC m=+0.095100724 container died dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 25 09:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-582d2c9c5b61abcf0b65c6e37a741bfa221b4d56337f485556dba45c63121b4f-merged.mount: Deactivated successfully.
Nov 25 09:57:05 compute-0 podman[264928]: 2025-11-25 09:57:05.944655043 +0000 UTC m=+0.112795179 container remove dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_yalow, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:57:05 compute-0 podman[264928]: 2025-11-25 09:57:05.849324728 +0000 UTC m=+0.017464895 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:57:05 compute-0 systemd[1]: libpod-conmon-dae362c1cc599291b7de20086f34993f80a6b7e38c2bd94bf242e8248202d5a2.scope: Deactivated successfully.
Nov 25 09:57:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:06.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:06 compute-0 podman[264964]: 2025-11-25 09:57:06.082341792 +0000 UTC m=+0.028981163 container create 520ddd3f0d85ffb02b0c3c7c81c34a33fa24c2cb3509b31eaff0d759cd981e8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_moore, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:57:06 compute-0 systemd[1]: Started libpod-conmon-520ddd3f0d85ffb02b0c3c7c81c34a33fa24c2cb3509b31eaff0d759cd981e8d.scope.
Nov 25 09:57:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304194894aaf1cd761d25c0c5bc8f78ef54f458165aa977afe2eeca8f7e35a44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304194894aaf1cd761d25c0c5bc8f78ef54f458165aa977afe2eeca8f7e35a44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304194894aaf1cd761d25c0c5bc8f78ef54f458165aa977afe2eeca8f7e35a44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304194894aaf1cd761d25c0c5bc8f78ef54f458165aa977afe2eeca8f7e35a44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:06 compute-0 podman[264964]: 2025-11-25 09:57:06.144520753 +0000 UTC m=+0.091160145 container init 520ddd3f0d85ffb02b0c3c7c81c34a33fa24c2cb3509b31eaff0d759cd981e8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_moore, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:57:06 compute-0 podman[264964]: 2025-11-25 09:57:06.149957722 +0000 UTC m=+0.096597093 container start 520ddd3f0d85ffb02b0c3c7c81c34a33fa24c2cb3509b31eaff0d759cd981e8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_moore, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:57:06 compute-0 podman[264964]: 2025-11-25 09:57:06.15113958 +0000 UTC m=+0.097778971 container attach 520ddd3f0d85ffb02b0c3c7c81c34a33fa24c2cb3509b31eaff0d759cd981e8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_moore, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 09:57:06 compute-0 podman[264964]: 2025-11-25 09:57:06.069654415 +0000 UTC m=+0.016293806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:57:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:06.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:06 compute-0 stupefied_moore[264977]: {
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:     "1": [
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:         {
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "devices": [
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "/dev/loop3"
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             ],
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "lv_name": "ceph_lv0",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "lv_size": "21470642176",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "name": "ceph_lv0",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "tags": {
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.cluster_name": "ceph",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.crush_device_class": "",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.encrypted": "0",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.osd_id": "1",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.type": "block",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.vdo": "0",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:                 "ceph.with_tpm": "0"
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             },
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "type": "block",
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:             "vg_name": "ceph_vg0"
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:         }
Nov 25 09:57:06 compute-0 stupefied_moore[264977]:     ]
Nov 25 09:57:06 compute-0 stupefied_moore[264977]: }
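[editor's note] The container output above is the `ceph-volume lvm list --format json` document for this host: a map from OSD id to the logical volumes backing it. A short consumer of exactly that document, assuming it has been captured to a file (the filename below is illustrative), could be:

    # Hedged sketch: walk the `ceph-volume lvm list --format json` output above.
    import json

    with open("lvm_list.json") as fh:      # assumption: the JSON above, saved to a file
        inventory = json.load(fh)

    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(fsid={tags['ceph.osd_fsid']}, encrypted={tags['ceph.encrypted']})")
    # For the document above this prints osd.1 backed by /dev/ceph_vg0/ceph_lv0
    # on /dev/loop3.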
Nov 25 09:57:06 compute-0 systemd[1]: libpod-520ddd3f0d85ffb02b0c3c7c81c34a33fa24c2cb3509b31eaff0d759cd981e8d.scope: Deactivated successfully.
Nov 25 09:57:06 compute-0 podman[264964]: 2025-11-25 09:57:06.396888965 +0000 UTC m=+0.343528346 container died 520ddd3f0d85ffb02b0c3c7c81c34a33fa24c2cb3509b31eaff0d759cd981e8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 09:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-304194894aaf1cd761d25c0c5bc8f78ef54f458165aa977afe2eeca8f7e35a44-merged.mount: Deactivated successfully.
Nov 25 09:57:06 compute-0 podman[264964]: 2025-11-25 09:57:06.423285274 +0000 UTC m=+0.369924645 container remove 520ddd3f0d85ffb02b0c3c7c81c34a33fa24c2cb3509b31eaff0d759cd981e8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_moore, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:57:06 compute-0 systemd[1]: libpod-conmon-520ddd3f0d85ffb02b0c3c7c81c34a33fa24c2cb3509b31eaff0d759cd981e8d.scope: Deactivated successfully.
Nov 25 09:57:06 compute-0 sudo[264872]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:06 compute-0 sudo[264995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:57:06 compute-0 sudo[264995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:06 compute-0 sudo[264995]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:06 compute-0 sudo[265020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:57:06 compute-0 sudo[265020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:06 compute-0 ceph-mon[74207]: pgmap v776: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 09:57:06 compute-0 podman[265077]: 2025-11-25 09:57:06.840110479 +0000 UTC m=+0.027871520 container create 315014b24243bb07dc1accdfe1c2b53d3c8521976e002cde7a4971ab5061e0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:57:06 compute-0 systemd[1]: Started libpod-conmon-315014b24243bb07dc1accdfe1c2b53d3c8521976e002cde7a4971ab5061e0cc.scope.
Nov 25 09:57:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:57:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:06 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:06 compute-0 podman[265077]: 2025-11-25 09:57:06.898296583 +0000 UTC m=+0.086057625 container init 315014b24243bb07dc1accdfe1c2b53d3c8521976e002cde7a4971ab5061e0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 09:57:06 compute-0 podman[265077]: 2025-11-25 09:57:06.902972458 +0000 UTC m=+0.090733499 container start 315014b24243bb07dc1accdfe1c2b53d3c8521976e002cde7a4971ab5061e0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:57:06 compute-0 admiring_easley[265091]: 167 167
Nov 25 09:57:06 compute-0 systemd[1]: libpod-315014b24243bb07dc1accdfe1c2b53d3c8521976e002cde7a4971ab5061e0cc.scope: Deactivated successfully.
Nov 25 09:57:06 compute-0 podman[265077]: 2025-11-25 09:57:06.907075822 +0000 UTC m=+0.094836885 container attach 315014b24243bb07dc1accdfe1c2b53d3c8521976e002cde7a4971ab5061e0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:57:06 compute-0 podman[265077]: 2025-11-25 09:57:06.907433226 +0000 UTC m=+0.095194268 container died 315014b24243bb07dc1accdfe1c2b53d3c8521976e002cde7a4971ab5061e0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d884b5dedb72f9e9e6cb6453382b22f09829ce1e58f80bf74996a9d3f7fb567-merged.mount: Deactivated successfully.
Nov 25 09:57:06 compute-0 podman[265077]: 2025-11-25 09:57:06.925111201 +0000 UTC m=+0.112872243 container remove 315014b24243bb07dc1accdfe1c2b53d3c8521976e002cde7a4971ab5061e0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:57:06 compute-0 podman[265077]: 2025-11-25 09:57:06.829233384 +0000 UTC m=+0.016994446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:57:06 compute-0 systemd[1]: libpod-conmon-315014b24243bb07dc1accdfe1c2b53d3c8521976e002cde7a4971ab5061e0cc.scope: Deactivated successfully.
Nov 25 09:57:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:07.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:07.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:07.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:07.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
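All three Alertmanager webhook targets fail the same way: the resolver at 192.168.122.80:53 cannot resolve np0005534694-6.shiftstack, so every POST to :8443/api/prometheus_receiver is retried and eventually canceled. The lookup failure is reproducible outside Alertmanager with a plain resolver call (hostname copied verbatim from the log):

    import socket

    try:
        # Same name resolution Alertmanager's dialer performs before the POST.
        socket.getaddrinfo("np0005534694.shiftstack", 8443)
    except socket.gaierror as exc:
        print("lookup failed:", exc)  # counterpart of Go's "no such host"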
Nov 25 09:57:07 compute-0 podman[265113]: 2025-11-25 09:57:07.067918673 +0000 UTC m=+0.032358519 container create b8edae54cac6ec3e17107d3ffde317ab520977aeb021f0c068fa88369f2afb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mayer, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 09:57:07 compute-0 systemd[1]: Started libpod-conmon-b8edae54cac6ec3e17107d3ffde317ab520977aeb021f0c068fa88369f2afb4a.scope.
Nov 25 09:57:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe45f027caddd008a228425810b8b5732e176cb95f22afb90a298da4902a031/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe45f027caddd008a228425810b8b5732e176cb95f22afb90a298da4902a031/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe45f027caddd008a228425810b8b5732e176cb95f22afb90a298da4902a031/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe45f027caddd008a228425810b8b5732e176cb95f22afb90a298da4902a031/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
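The four kernel lines above are informational: each bind-mounted XFS path was formatted without the bigtime feature, so inode timestamps are 32-bit and top out at 0x7fffffff seconds. The cutoff the kernel prints is plain epoch arithmetic:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch second.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00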
Nov 25 09:57:07 compute-0 podman[265113]: 2025-11-25 09:57:07.135387766 +0000 UTC m=+0.099827603 container init b8edae54cac6ec3e17107d3ffde317ab520977aeb021f0c068fa88369f2afb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mayer, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:57:07 compute-0 podman[265113]: 2025-11-25 09:57:07.140249381 +0000 UTC m=+0.104689216 container start b8edae54cac6ec3e17107d3ffde317ab520977aeb021f0c068fa88369f2afb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 09:57:07 compute-0 podman[265113]: 2025-11-25 09:57:07.141377156 +0000 UTC m=+0.105817002 container attach b8edae54cac6ec3e17107d3ffde317ab520977aeb021f0c068fa88369f2afb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mayer, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:57:07 compute-0 podman[265113]: 2025-11-25 09:57:07.055973735 +0000 UTC m=+0.020413591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:57:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:07 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:57:07 compute-0 lvm[265202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:57:07 compute-0 lvm[265202]: VG ceph_vg0 finished
Nov 25 09:57:07 compute-0 affectionate_mayer[265126]: {}
Nov 25 09:57:07 compute-0 systemd[1]: libpod-b8edae54cac6ec3e17107d3ffde317ab520977aeb021f0c068fa88369f2afb4a.scope: Deactivated successfully.
Nov 25 09:57:07 compute-0 podman[265113]: 2025-11-25 09:57:07.671933201 +0000 UTC m=+0.636373047 container died b8edae54cac6ec3e17107d3ffde317ab520977aeb021f0c068fa88369f2afb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:57:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-afe45f027caddd008a228425810b8b5732e176cb95f22afb90a298da4902a031-merged.mount: Deactivated successfully.
Nov 25 09:57:07 compute-0 podman[265113]: 2025-11-25 09:57:07.698762105 +0000 UTC m=+0.663201942 container remove b8edae54cac6ec3e17107d3ffde317ab520977aeb021f0c068fa88369f2afb4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mayer, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 09:57:07 compute-0 nova_compute[253512]: 2025-11-25 09:57:07.701 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:07 compute-0 systemd[1]: libpod-conmon-b8edae54cac6ec3e17107d3ffde317ab520977aeb021f0c068fa88369f2afb4a.scope: Deactivated successfully.
Nov 25 09:57:07 compute-0 sudo[265020]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:57:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:57:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:07 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040059b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:07 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:07 compute-0 sudo[265214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:57:07 compute-0 sudo[265214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:07 compute-0 sudo[265214]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:08.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:08.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
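The repeating radosgw triplets (starting new request / req done / beast access line) are anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102, arriving about every two seconds per prober and always returning 200 with near-zero latency; they read as load-balancer health checks rather than S3 traffic. An equivalent probe, sketched with the stdlib (the target host and port 8080 are assumptions; the log does not show which address beast is bound to):

    import http.client

    # Assumed radosgw beast endpoint on this host; adjust host/port to match.
    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080,
                                      timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows 200 for these probes
    conn.close()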
Nov 25 09:57:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095708 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
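This haproxy warning and the recurring ganesha TIRPC events are consistent with each other: backend nfs.cephfs.0 refuses its Layer4 check, while the surviving ganesha instance keeps logging svc_vc_recv "proxy header rest len failed", i.e. connections on its PROXY-protocol listener that close before delivering a complete PROXY header, which is exactly what a bare TCP health check does. Under that assumption (listener address and port 2049 are guesses, not from the log), the trigger looks like:

    import socket

    # Open a TCP connection to the assumed ganesha proxy-protocol port and
    # close it without sending a PROXY v2 header, as an L4 check would.
    s = socket.create_connection(("127.0.0.1", 2049), timeout=2)
    s.close()  # ganesha should log svc_vc_recv ... (will set dead) for this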
Nov 25 09:57:08 compute-0 ceph-mon[74207]: pgmap v777: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:57:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:08 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:57:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:08 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040059b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 244 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 25 09:57:09 compute-0 nova_compute[253512]: 2025-11-25 09:57:09.711 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:10.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:10] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Nov 25 09:57:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:10] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Nov 25 09:57:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:10.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:10 compute-0 ceph-mon[74207]: pgmap v778: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 244 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 25 09:57:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:10 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040059b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:11 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v779: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 25 09:57:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:11 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f400a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:12.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:12.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:12 compute-0 nova_compute[253512]: 2025-11-25 09:57:12.703 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:12 compute-0 ceph-mon[74207]: pgmap v779: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 25 09:57:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:12 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:13 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 09:57:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:13 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100bf3c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:14.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:14.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:14 compute-0 nova_compute[253512]: 2025-11-25 09:57:14.713 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:14 compute-0 ceph-mon[74207]: pgmap v780: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 09:57:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:14 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:57:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
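The cephadm mgr polls the OSD blocklist each cycle; the mon records both handle_command and the audit-channel dispatch for it. The same monitor command from a shell, wrapped in Python so the JSON comes back parsed (this mirrors the {"prefix": "osd blocklist ls", "format": "json"} command in the audit line):

    import json
    import subprocess

    # Same query the mgr dispatches above, issued via the ceph CLI.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))  # blocklisted client addresses, if any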
Nov 25 09:57:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:57:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:57:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:57:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:57:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:57:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:57:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:15 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 09:57:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:15 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:16.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:16 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 09:57:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:16.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:16 compute-0 ceph-mon[74207]: pgmap v781: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 09:57:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:16 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:17.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:17.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:17.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:17.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:17 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 25 09:57:17 compute-0 nova_compute[253512]: 2025-11-25 09:57:17.705 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:17 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:18.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:18.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:18 compute-0 ceph-mon[74207]: pgmap v782: 337 pgs: 337 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 25 09:57:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:18 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:19 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:19 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 09:57:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:19 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:57:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:19 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 09:57:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Nov 25 09:57:19 compute-0 nova_compute[253512]: 2025-11-25 09:57:19.715 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:19 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3818507626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:20.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 25 09:57:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 25 09:57:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:20.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:20 compute-0 ceph-mon[74207]: pgmap v783: 337 pgs: 337 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Nov 25 09:57:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2969080741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:20 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:21 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:21 compute-0 nova_compute[253512]: 2025-11-25 09:57:21.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Nov 25 09:57:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:21 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100bff00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2898854963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:22.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:22 compute-0 sudo[265255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:57:22 compute-0 sudo[265255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:22 compute-0 sudo[265255]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:22 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
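The grace-period arc closes here: the reaper announced "NFS Server Now IN GRACE, duration 90" at 09:57:16, reloaded client reclaim state from the backend at 09:57:19, found nothing to wait for ("reclaim complete(0) clid count(0)"), and lifted grace at 09:57:22, roughly 6 seconds into the 90-second window. The early lift follows from the zero client count; the timestamps bear that out:

    from datetime import datetime

    # Timestamps copied from the ganesha lines above.
    entered = datetime(2025, 11, 25, 9, 57, 16)
    lifted = datetime(2025, 11, 25, 9, 57, 22)
    print((lifted - entered).seconds, "of a possible 90 seconds in grace")  # 6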
Nov 25 09:57:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:22.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:22 compute-0 podman[265279]: 2025-11-25 09:57:22.386473998 +0000 UTC m=+0.063843361 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
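This single record is podman's periodic healthcheck for the ovn_metadata_agent container: the configured test is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent, and it reports health_status=healthy with a zero failing streak. The same check can be run on demand with podman's standard healthcheck subcommand (container name taken from the record):

    import subprocess

    # Trigger the container's configured healthcheck once, outside the timer.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=True)  # exit 0 == healthy, matching the log line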
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.467 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.487 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.487 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.487 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.488 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.488 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.710 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:22 compute-0 ceph-mon[74207]: pgmap v784: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Nov 25 09:57:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/881388465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:57:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3899372740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.834 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
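Nova's resource tracker shells out to ceph for pool capacity; the exact command line is in the DEBUG record above, and it returned 0 in 0.346s. A sketch of the same call with the stdlib, assuming (as nova's environment does here) that /etc/ceph/ceph.conf and a client.openstack keyring are present:

    import json
    import subprocess

    # Verbatim command from the oslo.concurrency DEBUG line above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    stats = json.loads(subprocess.run(cmd, capture_output=True,
                                      text=True, check=True).stdout)
    print(stats["stats"]["total_avail_bytes"])  # cluster-wide free bytes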
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.876 253516 DEBUG nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:57:22 compute-0 nova_compute[253512]: 2025-11-25 09:57:22.876 253516 DEBUG nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 09:57:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:22 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.087 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.088 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4382MB free_disk=59.8970832824707GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.088 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.089 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.142 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Instance e414c01f-d327-411b-9309-c4c4dabd5b4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.142 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.143 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.177 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:57:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:23 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec007d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:57:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790036785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.515 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.518 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.528 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.540 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:57:23 compute-0 nova_compute[253512]: 2025-11-25 09:57:23.540 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
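The final resource view is internally consistent with the placement inventory and the single instance's allocations: used_ram 640MB is the 512MB host reservation from the inventory record plus the instance's 128MB allocation, and used_vcpus 1 matches its one VCPU. A quick check with the figures copied from the lines above:

    # Figures from the resource-tracker and inventory records above.
    reserved_ram_mb = 512      # MEMORY_MB "reserved" in the inventory data
    instance_ram_mb = 128      # instance e414c01f... MEMORY_MB allocation
    assert reserved_ram_mb + instance_ram_mb == 640   # used_ram=640MB
    total_vcpus, used_vcpus = 4, 1                    # from the final view
    assert total_vcpus - used_vcpus == 3              # matches free_vcpus=3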
Nov 25 09:57:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:23 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3899372740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2790036785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:57:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:24.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:24.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:24 compute-0 nova_compute[253512]: 2025-11-25 09:57:24.541 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:24 compute-0 nova_compute[253512]: 2025-11-25 09:57:24.541 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:24 compute-0 nova_compute[253512]: 2025-11-25 09:57:24.566 253516 DEBUG nova.compute.manager [req-24d54b37-78c2-4fb5-a60e-433d79a3218a req-b472bd1b-9f80-481d-b5f7-33e3422f707f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-changed-b3599bd2-09f9-4143-abc8-745915f961e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:57:24 compute-0 nova_compute[253512]: 2025-11-25 09:57:24.566 253516 DEBUG nova.compute.manager [req-24d54b37-78c2-4fb5-a60e-433d79a3218a req-b472bd1b-9f80-481d-b5f7-33e3422f707f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing instance network info cache due to event network-changed-b3599bd2-09f9-4143-abc8-745915f961e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:57:24 compute-0 nova_compute[253512]: 2025-11-25 09:57:24.567 253516 DEBUG oslo_concurrency.lockutils [req-24d54b37-78c2-4fb5-a60e-433d79a3218a req-b472bd1b-9f80-481d-b5f7-33e3422f707f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:57:24 compute-0 nova_compute[253512]: 2025-11-25 09:57:24.567 253516 DEBUG oslo_concurrency.lockutils [req-24d54b37-78c2-4fb5-a60e-433d79a3218a req-b472bd1b-9f80-481d-b5f7-33e3422f707f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:57:24 compute-0 nova_compute[253512]: 2025-11-25 09:57:24.567 253516 DEBUG nova.network.neutron [req-24d54b37-78c2-4fb5-a60e-433d79a3218a req-b472bd1b-9f80-481d-b5f7-33e3422f707f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing network info cache for port b3599bd2-09f9-4143-abc8-745915f961e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:57:24 compute-0 nova_compute[253512]: 2025-11-25 09:57:24.716 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:24 compute-0 ceph-mon[74207]: pgmap v785: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 09:57:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:24 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c1370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:25 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 09:57:25 compute-0 nova_compute[253512]: 2025-11-25 09:57:25.633 253516 DEBUG nova.network.neutron [req-24d54b37-78c2-4fb5-a60e-433d79a3218a req-b472bd1b-9f80-481d-b5f7-33e3422f707f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updated VIF entry in instance network info cache for port b3599bd2-09f9-4143-abc8-745915f961e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:57:25 compute-0 nova_compute[253512]: 2025-11-25 09:57:25.634 253516 DEBUG nova.network.neutron [req-24d54b37-78c2-4fb5-a60e-433d79a3218a req-b472bd1b-9f80-481d-b5f7-33e3422f707f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:57:25 compute-0 nova_compute[253512]: 2025-11-25 09:57:25.649 253516 DEBUG oslo_concurrency.lockutils [req-24d54b37-78c2-4fb5-a60e-433d79a3218a req-b472bd1b-9f80-481d-b5f7-33e3422f707f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:57:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:25 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:26.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:26.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:26 compute-0 nova_compute[253512]: 2025-11-25 09:57:26.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:26 compute-0 nova_compute[253512]: 2025-11-25 09:57:26.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:57:26 compute-0 nova_compute[253512]: 2025-11-25 09:57:26.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:57:26 compute-0 nova_compute[253512]: 2025-11-25 09:57:26.617 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:57:26 compute-0 nova_compute[253512]: 2025-11-25 09:57:26.618 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquired lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:57:26 compute-0 nova_compute[253512]: 2025-11-25 09:57:26.618 253516 DEBUG nova.network.neutron [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 09:57:26 compute-0 nova_compute[253512]: 2025-11-25 09:57:26.618 253516 DEBUG nova.objects.instance [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e414c01f-d327-411b-9309-c4c4dabd5b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:57:26 compute-0 ceph-mon[74207]: pgmap v786: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 09:57:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:26 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c0023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:27.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:27.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:27.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:27.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:27 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c1370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Nov 25 09:57:27 compute-0 nova_compute[253512]: 2025-11-25 09:57:27.712 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:27 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb718004a10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.956075) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647956095, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1484, "num_deletes": 250, "total_data_size": 2674083, "memory_usage": 2709912, "flush_reason": "Manual Compaction"}
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647960036, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1602202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22129, "largest_seqno": 23612, "table_properties": {"data_size": 1597003, "index_size": 2403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13718, "raw_average_key_size": 20, "raw_value_size": 1585615, "raw_average_value_size": 2373, "num_data_blocks": 105, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064510, "oldest_key_time": 1764064510, "file_creation_time": 1764064647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 3986 microseconds, and 2867 cpu microseconds.
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960062) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1602202 bytes OK
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960072) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960652) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960662) EVENT_LOG_v1 {"time_micros": 1764064647960659, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960671) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2667741, prev total WAL file size 2667741, number of live WAL files 2.
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.962008) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1564KB)], [47(13MB)]
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647962037, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16006324, "oldest_snapshot_seqno": -1}
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5672 keys, 12928830 bytes, temperature: kUnknown
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647990331, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12928830, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12891742, "index_size": 21810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 142282, "raw_average_key_size": 25, "raw_value_size": 12790212, "raw_average_value_size": 2254, "num_data_blocks": 895, "num_entries": 5672, "num_filter_entries": 5672, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764064647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.990462) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12928830 bytes
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.990877) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 565.0 rd, 456.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.7 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(18.1) write-amplify(8.1) OK, records in: 6125, records dropped: 453 output_compression: NoCompression
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.990914) EVENT_LOG_v1 {"time_micros": 1764064647990884, "job": 24, "event": "compaction_finished", "compaction_time_micros": 28328, "compaction_time_cpu_micros": 21365, "output_level": 6, "num_output_files": 1, "total_output_size": 12928830, "num_input_records": 6125, "num_output_records": 5672, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647991140, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647993001, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.961975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.993111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.993115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.993117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.993118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:27 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:27.993119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:28.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:28 compute-0 nova_compute[253512]: 2025-11-25 09:57:28.273 253516 DEBUG nova.network.neutron [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:57:28 compute-0 nova_compute[253512]: 2025-11-25 09:57:28.291 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Releasing lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:57:28 compute-0 nova_compute[253512]: 2025-11-25 09:57:28.291 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 09:57:28 compute-0 nova_compute[253512]: 2025-11-25 09:57:28.292 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:28 compute-0 nova_compute[253512]: 2025-11-25 09:57:28.292 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:57:28 compute-0 nova_compute[253512]: 2025-11-25 09:57:28.292 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:57:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:28.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095728 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 09:57:28 compute-0 ceph-mon[74207]: pgmap v787: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Nov 25 09:57:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:28 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:29 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 26 KiB/s wr, 5 op/s
Nov 25 09:57:29 compute-0 nova_compute[253512]: 2025-11-25 09:57:29.718 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:29 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:57:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:30 compute-0 podman[265351]: 2025-11-25 09:57:30.000552609 +0000 UTC m=+0.059846146 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:57:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:30.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:30] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Nov 25 09:57:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:30] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Nov 25 09:57:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:30.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:30 compute-0 ceph-mon[74207]: pgmap v788: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 26 KiB/s wr, 5 op/s
Nov 25 09:57:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:30 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb718005330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:31 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 27 KiB/s wr, 6 op/s
Nov 25 09:57:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:31 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:32.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:32.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:32 compute-0 nova_compute[253512]: 2025-11-25 09:57:32.714 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:32 compute-0 ceph-mon[74207]: pgmap v789: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 27 KiB/s wr, 6 op/s
Nov 25 09:57:32 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:32 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:33 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:57:33 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:33 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:34.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:34.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:34 compute-0 nova_compute[253512]: 2025-11-25 09:57:34.720 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:34 compute-0 ceph-mon[74207]: pgmap v790: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:57:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:34 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c0040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:35 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:57:35 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:35 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:35 compute-0 podman[265380]: 2025-11-25 09:57:35.981574595 +0000 UTC m=+0.042833442 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:57:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:36.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:36.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:36 compute-0 ceph-mon[74207]: pgmap v791: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:57:36 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:36 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:37.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:37.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:37.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:37.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:37 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c0040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 16 KiB/s wr, 3 op/s
Nov 25 09:57:37 compute-0 nova_compute[253512]: 2025-11-25 09:57:37.716 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:37 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:38.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:38.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:38 compute-0 ceph-mon[74207]: pgmap v792: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 16 KiB/s wr, 3 op/s
Nov 25 09:57:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:38 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:39 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.1 KiB/s wr, 0 op/s
Nov 25 09:57:39 compute-0 nova_compute[253512]: 2025-11-25 09:57:39.721 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:39 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:39 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:40.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:40] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Nov 25 09:57:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:40] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Nov 25 09:57:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:40.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:40 compute-0 ceph-mon[74207]: pgmap v793: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.1 KiB/s wr, 0 op/s
Nov 25 09:57:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:40 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:41 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 6.4 KiB/s wr, 1 op/s
Nov 25 09:57:41 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:41 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:42.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:42 compute-0 sudo[265404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:57:42 compute-0 sudo[265404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:57:42 compute-0 sudo[265404]: pam_unix(sudo:session): session closed for user root
Nov 25 09:57:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:42.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:42 compute-0 nova_compute[253512]: 2025-11-25 09:57:42.718 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:42 compute-0 ceph-mon[74207]: pgmap v794: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 6.4 KiB/s wr, 1 op/s
Nov 25 09:57:42 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:42 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c004dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:43 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 5.3 KiB/s wr, 1 op/s
Nov 25 09:57:43 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:43 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:44.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:44.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:44 compute-0 nova_compute[253512]: 2025-11-25 09:57:44.723 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:44 compute-0 ceph-mon[74207]: pgmap v795: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 5.3 KiB/s wr, 1 op/s
Nov 25 09:57:44 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:44 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:57:44
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'vms', 'backups', 'images', '.mgr', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data']
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:57:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:57:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:57:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:57:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:45 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 5.3 KiB/s wr, 1 op/s
Nov 25 09:57:45 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:45 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:57:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:46.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:46.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:46 compute-0 ceph-mon[74207]: pgmap v796: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 5.3 KiB/s wr, 1 op/s
Nov 25 09:57:46 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:46 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:47.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:47.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:47.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:47.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
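
The dispatcher error above is the terminal form of the warn lines that follow it: every notification to the ceph-dashboard webhook receivers is retried (7-8 attempts) and then canceled because np0005534694/5/6.shiftstack cannot be resolved by the nameserver at 192.168.122.80:53, so no alert ever reaches the prometheus_receiver endpoints on port 8443. The same triplet repeats at 09:57:57 and 09:58:07 below. A minimal resolution probe (hostnames and port are taken from the log; running it is only meaningful on a host that uses the same resolver):

import socket

hosts = ["np0005534694.shiftstack",
         "np0005534695.shiftstack",
         "np0005534696.shiftstack"]
for host in hosts:
    try:
        addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(host, 8443)})
        print(f"{host}: {addrs}")
    except socket.gaierror as exc:
        # Mirrors the "no such host" failures logged above.
        print(f"{host}: unresolvable ({exc})")
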
Nov 25 09:57:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:47 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 5.7 KiB/s wr, 1 op/s
Nov 25 09:57:47 compute-0 nova_compute[253512]: 2025-11-25 09:57:47.719 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:47 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:48.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:48.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:48 compute-0 ceph-mon[74207]: pgmap v797: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 5.7 KiB/s wr, 1 op/s
Nov 25 09:57:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:48 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c005ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:49 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 2.7 KiB/s wr, 0 op/s
Nov 25 09:57:49 compute-0 nova_compute[253512]: 2025-11-25 09:57:49.725 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:49 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:49 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:49 compute-0 ceph-mon[74207]: pgmap v798: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 2.7 KiB/s wr, 0 op/s
Nov 25 09:57:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:50.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:50] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 25 09:57:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:57:50] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 25 09:57:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:50.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:50 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:51 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c005ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v799: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 6.3 KiB/s wr, 1 op/s
Nov 25 09:57:51 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:51 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:52.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:52.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:52 compute-0 ceph-mon[74207]: pgmap v799: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 6.3 KiB/s wr, 1 op/s
Nov 25 09:57:52 compute-0 nova_compute[253512]: 2025-11-25 09:57:52.721 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:52 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:52 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.962602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672962628, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 450, "num_deletes": 257, "total_data_size": 426662, "memory_usage": 435528, "flush_reason": "Manual Compaction"}
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672964475, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 422803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23613, "largest_seqno": 24062, "table_properties": {"data_size": 420215, "index_size": 624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 5728, "raw_average_key_size": 17, "raw_value_size": 415138, "raw_average_value_size": 1246, "num_data_blocks": 28, "num_entries": 333, "num_filter_entries": 333, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064648, "oldest_key_time": 1764064648, "file_creation_time": 1764064672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 1893 microseconds, and 1360 cpu microseconds.
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.964497) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 422803 bytes OK
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.964508) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.964883) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.964907) EVENT_LOG_v1 {"time_micros": 1764064672964889, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.964918) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 423963, prev total WAL file size 423963, number of live WAL files 2.
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965477) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(412KB)], [50(12MB)]
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672965669, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13351633, "oldest_snapshot_seqno": -1}
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5483 keys, 13189047 bytes, temperature: kUnknown
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672989998, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13189047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13152486, "index_size": 21731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 139519, "raw_average_key_size": 25, "raw_value_size": 13053517, "raw_average_value_size": 2380, "num_data_blocks": 886, "num_entries": 5483, "num_filter_entries": 5483, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764064672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.990158) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13189047 bytes
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.994569) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 549.1 rd, 542.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 12.3 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(62.8) write-amplify(31.2) OK, records in: 6005, records dropped: 522 output_compression: NoCompression
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.994593) EVENT_LOG_v1 {"time_micros": 1764064672994586, "job": 26, "event": "compaction_finished", "compaction_time_micros": 24315, "compaction_time_cpu_micros": 19585, "output_level": 6, "num_output_files": 1, "total_output_size": 13189047, "num_input_records": 6005, "num_output_records": 5483, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672994723, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672996208, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.996237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.996240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.996241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.996242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:57:52 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:57:52.996243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
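
The rocksdb burst above is one manual-compaction cycle on the mon's store.db: JOB 25 flushes the active memtable (450 entries, 257 deletes) to L0 table #52 (422803 bytes, 1893 microseconds), the WAL 000048.log is deleted, then JOB 26 compacts one L0 plus one L6 file (#52 + #50, 13351633 input bytes) into L6 table #53 (13189047 bytes; 6005 records in, 522 dropped), after which both inputs are removed; hence the reported write-amplify of 31.2. The EVENT_LOG_v1 payloads are plain JSON after the marker, so the cycle can be summarized mechanically (the file path is an assumption; the payload format is exactly what appears above):

import json
import re
import sys

evt = re.compile(r"EVENT_LOG_v1 (\{.*\})")
with open(sys.argv[1]) as log:              # a saved copy of this journal
    for line in log:
        m = evt.search(line)
        if not m:
            continue
        e = json.loads(m.group(1))
        size = e.get("file_size") or e.get("total_output_size") or ""
        print(f'job {e.get("job")}: {e["event"]} {size}')
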
Nov 25 09:57:53 compute-0 podman[265440]: 2025-11-25 09:57:53.000500745 +0000 UTC m=+0.067822012 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 09:57:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:53 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 1 op/s
Nov 25 09:57:53 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:53 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c005ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 09:57:53 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/687601177' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:57:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 09:57:53 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/687601177' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:57:53 compute-0 ceph-mon[74207]: pgmap v800: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 1 op/s
Nov 25 09:57:53 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/687601177' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:57:53 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/687601177' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
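
These audit entries show client.openstack at 192.168.122.10 (plausibly the Cinder side of the control plane, an inference) polling cluster capacity: a "df" and an "osd pool get-quota" on the volumes pool, both dispatched as JSON mon commands. The same calls can be issued through the python-rados binding; a sketch, assuming python3-rados is installed and a readable conffile/keyring for client.openstack (the paths, and sufficient caps on that client, are assumptions):

import json
import rados  # python3-rados, assumed available

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
for cmd in ({"prefix": "df", "format": "json"},
            {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
    # mon_command takes the command as a JSON string plus an input buffer.
    ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
    print(cmd["prefix"], "->", ret, out[:80])
cluster.shutdown()
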
Nov 25 09:57:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:54.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:54.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:54 compute-0 ovn_controller[155020]: 2025-11-25T09:57:54Z|00061|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Nov 25 09:57:54 compute-0 nova_compute[253512]: 2025-11-25 09:57:54.727 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:54 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:54 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec008290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:55 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001521852819561112 of space, bias 1.0, pg target 0.45655584586833364 quantized to 32 (current 32)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
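
Each autoscaler line reports the pool's share of raw capacity, its bias, and a raw PG target that is then quantized. In every sampled line the raw target equals ratio times bias times 300; the factor 300 is plausibly the cluster's PG budget (on the order of mon_target_pg_per_osd times the OSD count, an interpretation the log does not state), while the arithmetic itself is checkable from the numbers above:

import math

# (capacity ratio, bias, raw pg target) copied verbatim from the lines above
sampled = {
    ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
    "vms":                (0.001521852819561112,  1.0, 0.45655584586833364),
    "images":             (0.000665858301588852,  1.0, 0.19975749047665559),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
}
for pool, (ratio, bias, target) in sampled.items():
    assert math.isclose(ratio * bias * 300, target, rel_tol=1e-9), pool
print("raw pg target == ratio * bias * 300 for all sampled pools")
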
Nov 25 09:57:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v801: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 1 op/s
Nov 25 09:57:55 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:55 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7100c2080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:57:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:57:56 compute-0 ceph-mon[74207]: pgmap v801: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 1 op/s
Nov 25 09:57:56 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:56 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c005ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:57.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:57.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:57.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:57:57.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:57:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 7.3 KiB/s wr, 1 op/s
Nov 25 09:57:57 compute-0 nova_compute[253512]: 2025-11-25 09:57:57.723 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:57 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:57:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:58.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:57:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:57:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:58.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:57:58 compute-0 ceph-mon[74207]: pgmap v802: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 7.3 KiB/s wr, 1 op/s
Nov 25 09:57:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:58 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f4001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:59 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c005ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 7.0 KiB/s wr, 1 op/s
Nov 25 09:57:59 compute-0 nova_compute[253512]: 2025-11-25 09:57:59.730 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:57:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:57:59 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:57:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:57:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:00.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 25 09:58:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 25 09:58:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:00.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:00 compute-0 ceph-mon[74207]: pgmap v803: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 7.0 KiB/s wr, 1 op/s
Nov 25 09:58:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:00 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040066e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:00 compute-0 podman[265465]: 2025-11-25 09:58:00.989083159 +0000 UTC m=+0.055260782 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 09:58:01 compute-0 anacron[4565]: Job `cron.weekly' started
Nov 25 09:58:01 compute-0 anacron[4565]: Job `cron.weekly' terminated
Nov 25 09:58:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:01 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f4001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v804: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 8.3 KiB/s wr, 1 op/s
Nov 25 09:58:01 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:01 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c005ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:02.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:02.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:02 compute-0 sudo[265493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:58:02 compute-0 sudo[265493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:02 compute-0 sudo[265493]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:02 compute-0 ceph-mon[74207]: pgmap v804: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 8.3 KiB/s wr, 1 op/s
Nov 25 09:58:02 compute-0 nova_compute[253512]: 2025-11-25 09:58:02.723 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:02 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:02 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:03 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.7 KiB/s wr, 0 op/s
Nov 25 09:58:03 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:03 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:04.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:04 compute-0 ceph-mon[74207]: pgmap v805: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.7 KiB/s wr, 0 op/s
Nov 25 09:58:04 compute-0 nova_compute[253512]: 2025-11-25 09:58:04.732 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:04 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:04 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb724004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:05 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:05.386 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:05.386 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:05.387 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.7 KiB/s wr, 0 op/s
Nov 25 09:58:05 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:05 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:06.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:06.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:06 compute-0 ceph-mon[74207]: pgmap v806: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.7 KiB/s wr, 0 op/s
Nov 25 09:58:06 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:06 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:06 compute-0 podman[265523]: 2025-11-25 09:58:06.976666352 +0000 UTC m=+0.040983755 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:58:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:07.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:07.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:07.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:07.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:07 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb724005260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 2 op/s
Nov 25 09:58:07 compute-0 nova_compute[253512]: 2025-11-25 09:58:07.724 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:07 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c005ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:07 compute-0 sudo[265542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:58:08 compute-0 sudo[265542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:08 compute-0 sudo[265542]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:08 compute-0 sudo[265567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:58:08 compute-0 sudo[265567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:08.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:08.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:08 compute-0 sudo[265567]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:08 compute-0 ceph-mon[74207]: pgmap v807: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 2 op/s
Nov 25 09:58:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:08 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:58:09 compute-0 nova_compute[253512]: 2025-11-25 09:58:09.732 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:09 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:09 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb724005260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 09:58:09 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 09:58:09 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:10.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 25 09:58:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 25 09:58:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:58:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:58:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:58:10 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:58:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:58:10 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:58:10 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:58:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:58:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:58:10 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:58:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:58:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:58:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:10.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:10 compute-0 sudo[265624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:58:10 compute-0 sudo[265624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:10 compute-0 sudo[265624]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:10 compute-0 sudo[265649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:58:10 compute-0 sudo[265649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:10 compute-0 podman[265706]: 2025-11-25 09:58:10.753420388 +0000 UTC m=+0.024811998 container create ab30875d280a410eb0cc39ac64da846063a9d2d908cbde603fe1410c189e35fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:58:10 compute-0 systemd[1]: Started libpod-conmon-ab30875d280a410eb0cc39ac64da846063a9d2d908cbde603fe1410c189e35fc.scope.
Nov 25 09:58:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:58:10 compute-0 podman[265706]: 2025-11-25 09:58:10.807121117 +0000 UTC m=+0.078512748 container init ab30875d280a410eb0cc39ac64da846063a9d2d908cbde603fe1410c189e35fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:58:10 compute-0 podman[265706]: 2025-11-25 09:58:10.811289075 +0000 UTC m=+0.082680685 container start ab30875d280a410eb0cc39ac64da846063a9d2d908cbde603fe1410c189e35fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 09:58:10 compute-0 podman[265706]: 2025-11-25 09:58:10.812406582 +0000 UTC m=+0.083798192 container attach ab30875d280a410eb0cc39ac64da846063a9d2d908cbde603fe1410c189e35fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 09:58:10 compute-0 sad_curran[265720]: 167 167
Nov 25 09:58:10 compute-0 systemd[1]: libpod-ab30875d280a410eb0cc39ac64da846063a9d2d908cbde603fe1410c189e35fc.scope: Deactivated successfully.
Nov 25 09:58:10 compute-0 ceph-mon[74207]: pgmap v808: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:58:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:58:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:58:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:58:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:58:10 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:58:10 compute-0 podman[265706]: 2025-11-25 09:58:10.743218616 +0000 UTC m=+0.014610236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:58:10 compute-0 podman[265725]: 2025-11-25 09:58:10.848145578 +0000 UTC m=+0.017631870 container died ab30875d280a410eb0cc39ac64da846063a9d2d908cbde603fe1410c189e35fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0506447e440f9c28e55d24bcd716e5acef9234ea6d1f002d547d5b4b5ea524e4-merged.mount: Deactivated successfully.
Nov 25 09:58:10 compute-0 podman[265725]: 2025-11-25 09:58:10.8658313 +0000 UTC m=+0.035317582 container remove ab30875d280a410eb0cc39ac64da846063a9d2d908cbde603fe1410c189e35fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:58:10 compute-0 systemd[1]: libpod-conmon-ab30875d280a410eb0cc39ac64da846063a9d2d908cbde603fe1410c189e35fc.scope: Deactivated successfully.
Nov 25 09:58:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:10 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c005ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:10 compute-0 podman[265743]: 2025-11-25 09:58:10.99390533 +0000 UTC m=+0.027313493 container create 8276284d6c2df6ad8d9b53cf4a14a65413ee80efd57309560d2f98440780bf32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:58:11 compute-0 systemd[1]: Started libpod-conmon-8276284d6c2df6ad8d9b53cf4a14a65413ee80efd57309560d2f98440780bf32.scope.
Nov 25 09:58:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036574b4887341273740e251dad37ae2435727df5c23f809a5d5a4d1058e20ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036574b4887341273740e251dad37ae2435727df5c23f809a5d5a4d1058e20ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036574b4887341273740e251dad37ae2435727df5c23f809a5d5a4d1058e20ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036574b4887341273740e251dad37ae2435727df5c23f809a5d5a4d1058e20ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036574b4887341273740e251dad37ae2435727df5c23f809a5d5a4d1058e20ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:11 compute-0 podman[265743]: 2025-11-25 09:58:11.053503558 +0000 UTC m=+0.086911741 container init 8276284d6c2df6ad8d9b53cf4a14a65413ee80efd57309560d2f98440780bf32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kilby, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 09:58:11 compute-0 podman[265743]: 2025-11-25 09:58:11.058041333 +0000 UTC m=+0.091449495 container start 8276284d6c2df6ad8d9b53cf4a14a65413ee80efd57309560d2f98440780bf32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:58:11 compute-0 podman[265743]: 2025-11-25 09:58:11.059043321 +0000 UTC m=+0.092451504 container attach 8276284d6c2df6ad8d9b53cf4a14a65413ee80efd57309560d2f98440780bf32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kilby, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:58:11 compute-0 podman[265743]: 2025-11-25 09:58:10.98253227 +0000 UTC m=+0.015940453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:58:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:11 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:11 compute-0 awesome_kilby[265757]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:58:11 compute-0 awesome_kilby[265757]: --> All data devices are unavailable
Nov 25 09:58:11 compute-0 systemd[1]: libpod-8276284d6c2df6ad8d9b53cf4a14a65413ee80efd57309560d2f98440780bf32.scope: Deactivated successfully.
Nov 25 09:58:11 compute-0 podman[265743]: 2025-11-25 09:58:11.319808767 +0000 UTC m=+0.353216930 container died 8276284d6c2df6ad8d9b53cf4a14a65413ee80efd57309560d2f98440780bf32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-036574b4887341273740e251dad37ae2435727df5c23f809a5d5a4d1058e20ee-merged.mount: Deactivated successfully.
Nov 25 09:58:11 compute-0 podman[265743]: 2025-11-25 09:58:11.341425951 +0000 UTC m=+0.374834114 container remove 8276284d6c2df6ad8d9b53cf4a14a65413ee80efd57309560d2f98440780bf32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:58:11 compute-0 systemd[1]: libpod-conmon-8276284d6c2df6ad8d9b53cf4a14a65413ee80efd57309560d2f98440780bf32.scope: Deactivated successfully.
Nov 25 09:58:11 compute-0 sudo[265649]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:11 compute-0 sudo[265783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:58:11 compute-0 sudo[265783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:11 compute-0 sudo[265783]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:11 compute-0 sudo[265808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:58:11 compute-0 sudo[265808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:58:11 compute-0 podman[265865]: 2025-11-25 09:58:11.738428547 +0000 UTC m=+0.026106909 container create 39694fcc5ab73f6bf464713a45e130d197332f9cdf86f341567cb606a4b28565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_beaver, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:58:11 compute-0 systemd[1]: Started libpod-conmon-39694fcc5ab73f6bf464713a45e130d197332f9cdf86f341567cb606a4b28565.scope.
Nov 25 09:58:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:11 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:58:11 compute-0 podman[265865]: 2025-11-25 09:58:11.792814698 +0000 UTC m=+0.080493061 container init 39694fcc5ab73f6bf464713a45e130d197332f9cdf86f341567cb606a4b28565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_beaver, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 09:58:11 compute-0 podman[265865]: 2025-11-25 09:58:11.796818056 +0000 UTC m=+0.084496418 container start 39694fcc5ab73f6bf464713a45e130d197332f9cdf86f341567cb606a4b28565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_beaver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Nov 25 09:58:11 compute-0 podman[265865]: 2025-11-25 09:58:11.798191595 +0000 UTC m=+0.085869958 container attach 39694fcc5ab73f6bf464713a45e130d197332f9cdf86f341567cb606a4b28565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_beaver, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:58:11 compute-0 nifty_beaver[265877]: 167 167
Nov 25 09:58:11 compute-0 systemd[1]: libpod-39694fcc5ab73f6bf464713a45e130d197332f9cdf86f341567cb606a4b28565.scope: Deactivated successfully.
Nov 25 09:58:11 compute-0 podman[265865]: 2025-11-25 09:58:11.800141972 +0000 UTC m=+0.087820324 container died 39694fcc5ab73f6bf464713a45e130d197332f9cdf86f341567cb606a4b28565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 09:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f71ffa5ed317bd5625d32d4a248e57a40047e51a0646748cf4e142d58cef4773-merged.mount: Deactivated successfully.
Nov 25 09:58:11 compute-0 podman[265865]: 2025-11-25 09:58:11.816974065 +0000 UTC m=+0.104652426 container remove 39694fcc5ab73f6bf464713a45e130d197332f9cdf86f341567cb606a4b28565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 09:58:11 compute-0 podman[265865]: 2025-11-25 09:58:11.72852682 +0000 UTC m=+0.016205202 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:58:11 compute-0 systemd[1]: libpod-conmon-39694fcc5ab73f6bf464713a45e130d197332f9cdf86f341567cb606a4b28565.scope: Deactivated successfully.
Nov 25 09:58:11 compute-0 podman[265901]: 2025-11-25 09:58:11.946837409 +0000 UTC m=+0.026074538 container create 59407fc4bb14b47d2d4e439c827334ac2bf09761666adb031217a09453745b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_yonath, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 25 09:58:11 compute-0 systemd[1]: Started libpod-conmon-59407fc4bb14b47d2d4e439c827334ac2bf09761666adb031217a09453745b9c.scope.
Nov 25 09:58:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea45a4ee2998695eb5b029a1e1ec362efcc95225b8ef89180d193f46e9c1c52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea45a4ee2998695eb5b029a1e1ec362efcc95225b8ef89180d193f46e9c1c52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea45a4ee2998695eb5b029a1e1ec362efcc95225b8ef89180d193f46e9c1c52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea45a4ee2998695eb5b029a1e1ec362efcc95225b8ef89180d193f46e9c1c52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:12 compute-0 podman[265901]: 2025-11-25 09:58:12.005830444 +0000 UTC m=+0.085067583 container init 59407fc4bb14b47d2d4e439c827334ac2bf09761666adb031217a09453745b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_yonath, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:58:12 compute-0 podman[265901]: 2025-11-25 09:58:12.01093041 +0000 UTC m=+0.090167529 container start 59407fc4bb14b47d2d4e439c827334ac2bf09761666adb031217a09453745b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_yonath, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Nov 25 09:58:12 compute-0 podman[265901]: 2025-11-25 09:58:12.012135892 +0000 UTC m=+0.091373011 container attach 59407fc4bb14b47d2d4e439c827334ac2bf09761666adb031217a09453745b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:58:12 compute-0 podman[265901]: 2025-11-25 09:58:11.936532742 +0000 UTC m=+0.015769881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:58:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:12.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]: {
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:     "1": [
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:         {
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "devices": [
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "/dev/loop3"
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             ],
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "lv_name": "ceph_lv0",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "lv_size": "21470642176",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "name": "ceph_lv0",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "tags": {
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.cluster_name": "ceph",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.crush_device_class": "",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.encrypted": "0",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.osd_id": "1",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.type": "block",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.vdo": "0",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:                 "ceph.with_tpm": "0"
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             },
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "type": "block",
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:             "vg_name": "ceph_vg0"
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:         }
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]:     ]
Nov 25 09:58:12 compute-0 optimistic_yonath[265914]: }
Nov 25 09:58:12 compute-0 systemd[1]: libpod-59407fc4bb14b47d2d4e439c827334ac2bf09761666adb031217a09453745b9c.scope: Deactivated successfully.
Nov 25 09:58:12 compute-0 podman[265901]: 2025-11-25 09:58:12.238966673 +0000 UTC m=+0.318203782 container died 59407fc4bb14b47d2d4e439c827334ac2bf09761666adb031217a09453745b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_yonath, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 25 09:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ea45a4ee2998695eb5b029a1e1ec362efcc95225b8ef89180d193f46e9c1c52-merged.mount: Deactivated successfully.
Nov 25 09:58:12 compute-0 podman[265901]: 2025-11-25 09:58:12.262416402 +0000 UTC m=+0.341653521 container remove 59407fc4bb14b47d2d4e439c827334ac2bf09761666adb031217a09453745b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_yonath, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 25 09:58:12 compute-0 systemd[1]: libpod-conmon-59407fc4bb14b47d2d4e439c827334ac2bf09761666adb031217a09453745b9c.scope: Deactivated successfully.
Nov 25 09:58:12 compute-0 sudo[265808]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:12 compute-0 sudo[265933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:58:12 compute-0 sudo[265933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:12 compute-0 sudo[265933]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:12 compute-0 sudo[265958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:58:12 compute-0 sudo[265958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:12 compute-0 podman[266014]: 2025-11-25 09:58:12.662805351 +0000 UTC m=+0.025056218 container create 2aa077a383ae02d5162e6ba9191a3a7c4d5e79d96afc93e311019b38bcc37139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_neumann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 25 09:58:12 compute-0 systemd[1]: Started libpod-conmon-2aa077a383ae02d5162e6ba9191a3a7c4d5e79d96afc93e311019b38bcc37139.scope.
Nov 25 09:58:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:58:12 compute-0 podman[266014]: 2025-11-25 09:58:12.713685652 +0000 UTC m=+0.075936510 container init 2aa077a383ae02d5162e6ba9191a3a7c4d5e79d96afc93e311019b38bcc37139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:58:12 compute-0 podman[266014]: 2025-11-25 09:58:12.718355958 +0000 UTC m=+0.080606815 container start 2aa077a383ae02d5162e6ba9191a3a7c4d5e79d96afc93e311019b38bcc37139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_neumann, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:58:12 compute-0 podman[266014]: 2025-11-25 09:58:12.719703008 +0000 UTC m=+0.081953864 container attach 2aa077a383ae02d5162e6ba9191a3a7c4d5e79d96afc93e311019b38bcc37139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:58:12 compute-0 kind_neumann[266028]: 167 167
Nov 25 09:58:12 compute-0 systemd[1]: libpod-2aa077a383ae02d5162e6ba9191a3a7c4d5e79d96afc93e311019b38bcc37139.scope: Deactivated successfully.
Nov 25 09:58:12 compute-0 podman[266014]: 2025-11-25 09:58:12.721884139 +0000 UTC m=+0.084134997 container died 2aa077a383ae02d5162e6ba9191a3a7c4d5e79d96afc93e311019b38bcc37139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_neumann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:58:12 compute-0 nova_compute[253512]: 2025-11-25 09:58:12.726 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-eca3cc2260f10cb0ef6c998ecc784a72f2986e043989e7bfb3481cc12629314f-merged.mount: Deactivated successfully.
Nov 25 09:58:12 compute-0 podman[266014]: 2025-11-25 09:58:12.740461822 +0000 UTC m=+0.102712679 container remove 2aa077a383ae02d5162e6ba9191a3a7c4d5e79d96afc93e311019b38bcc37139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_neumann, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:58:12 compute-0 podman[266014]: 2025-11-25 09:58:12.652483974 +0000 UTC m=+0.014734841 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:58:12 compute-0 systemd[1]: libpod-conmon-2aa077a383ae02d5162e6ba9191a3a7c4d5e79d96afc93e311019b38bcc37139.scope: Deactivated successfully.
Nov 25 09:58:12 compute-0 ceph-mon[74207]: pgmap v809: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:58:12 compute-0 podman[266050]: 2025-11-25 09:58:12.868971154 +0000 UTC m=+0.027996711 container create c58096d95391b8d8c1559c9e753dbf698760f295e53083e7143cdda67fae111b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gauss, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:58:12 compute-0 systemd[1]: Started libpod-conmon-c58096d95391b8d8c1559c9e753dbf698760f295e53083e7143cdda67fae111b.scope.
Nov 25 09:58:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:58:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d77ff37145b48309c26f5ebdb313ab32906a5dac64f0cc68516a8664acac4ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d77ff37145b48309c26f5ebdb313ab32906a5dac64f0cc68516a8664acac4ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d77ff37145b48309c26f5ebdb313ab32906a5dac64f0cc68516a8664acac4ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d77ff37145b48309c26f5ebdb313ab32906a5dac64f0cc68516a8664acac4ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:58:12 compute-0 podman[266050]: 2025-11-25 09:58:12.92200738 +0000 UTC m=+0.081032957 container init c58096d95391b8d8c1559c9e753dbf698760f295e53083e7143cdda67fae111b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 09:58:12 compute-0 podman[266050]: 2025-11-25 09:58:12.926784055 +0000 UTC m=+0.085809612 container start c58096d95391b8d8c1559c9e753dbf698760f295e53083e7143cdda67fae111b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gauss, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 25 09:58:12 compute-0 podman[266050]: 2025-11-25 09:58:12.927999427 +0000 UTC m=+0.087024984 container attach c58096d95391b8d8c1559c9e753dbf698760f295e53083e7143cdda67fae111b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:58:12 compute-0 podman[266050]: 2025-11-25 09:58:12.858087555 +0000 UTC m=+0.017113132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:58:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:12 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb724005f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:13 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb71c005ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:13 compute-0 lvm[266139]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:58:13 compute-0 lvm[266139]: VG ceph_vg0 finished
Nov 25 09:58:13 compute-0 angry_gauss[266064]: {}
Nov 25 09:58:13 compute-0 systemd[1]: libpod-c58096d95391b8d8c1559c9e753dbf698760f295e53083e7143cdda67fae111b.scope: Deactivated successfully.
Nov 25 09:58:13 compute-0 podman[266050]: 2025-11-25 09:58:13.43382484 +0000 UTC m=+0.592850407 container died c58096d95391b8d8c1559c9e753dbf698760f295e53083e7143cdda67fae111b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gauss, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d77ff37145b48309c26f5ebdb313ab32906a5dac64f0cc68516a8664acac4ec-merged.mount: Deactivated successfully.
Nov 25 09:58:13 compute-0 podman[266050]: 2025-11-25 09:58:13.454446337 +0000 UTC m=+0.613471893 container remove c58096d95391b8d8c1559c9e753dbf698760f295e53083e7143cdda67fae111b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 09:58:13 compute-0 systemd[1]: libpod-conmon-c58096d95391b8d8c1559c9e753dbf698760f295e53083e7143cdda67fae111b.scope: Deactivated successfully.
Nov 25 09:58:13 compute-0 sudo[265958]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:58:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:58:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:58:13 compute-0 sudo[266152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:58:13 compute-0 sudo[266152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:13 compute-0 sudo[266152]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:13 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:13 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:14.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:14.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:58:14 compute-0 ceph-mon[74207]: pgmap v810: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:58:14 compute-0 nova_compute[253512]: 2025-11-25 09:58:14.735 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:58:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:14 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:14 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:58:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:58:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:58:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:58:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:58:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:58:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:15 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:58:15 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:15 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:16.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:16.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:16 compute-0 ceph-mon[74207]: pgmap v811: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 12 KiB/s wr, 2 op/s
Nov 25 09:58:16 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:16 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:17.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:17.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:17.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:17.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:17 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:58:17 compute-0 nova_compute[253512]: 2025-11-25 09:58:17.728 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:17 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:17 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:17.867 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:58:17 compute-0 nova_compute[253512]: 2025-11-25 09:58:17.867 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:17 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:17.868 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:58:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:18.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:18 compute-0 ceph-mon[74207]: pgmap v812: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 13 KiB/s wr, 2 op/s
Nov 25 09:58:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/346978042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:18 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb728002630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.180 253516 DEBUG oslo_concurrency.lockutils [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "interface-e414c01f-d327-411b-9309-c4c4dabd5b4a-b3599bd2-09f9-4143-abc8-745915f961e3" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.180 253516 DEBUG oslo_concurrency.lockutils [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "interface-e414c01f-d327-411b-9309-c4c4dabd5b4a-b3599bd2-09f9-4143-abc8-745915f961e3" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.188 253516 DEBUG nova.objects.instance [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'flavor' on Instance uuid e414c01f-d327-411b-9309-c4c4dabd5b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.199 253516 DEBUG nova.virt.libvirt.vif [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2009582697',display_name='tempest-TestNetworkBasicOps-server-2009582697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2009582697',id=6,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD21EiYZKhXbIpyaNEUjP1ulP9c0zDwkxr0Xxe9kxy5T7Kh/aZqrRNdEYeVYyDq7wYIqSwgggji3NCoHXpcuxZfFxnprvDIJCcOEcX/dIdfv+vRs+aEB3wFMQZGt8WdE2g==',key_name='tempest-TestNetworkBasicOps-1281314821',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:56:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-36v3wqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:56:29Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=e414c01f-d327-411b-9309-c4c4dabd5b4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.199 253516 DEBUG nova.network.os_vif_util [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.199 253516 DEBUG nova.network.os_vif_util [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=b3599bd2-09f9-4143-abc8-745915f961e3,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3599bd2-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.202 253516 DEBUG nova.virt.libvirt.guest [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:3a:8c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3599bd2-09"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.204 253516 DEBUG nova.virt.libvirt.guest [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:3a:8c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3599bd2-09"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.206 253516 DEBUG nova.virt.libvirt.driver [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Attempting to detach device tapb3599bd2-09 from instance e414c01f-d327-411b-9309-c4c4dabd5b4a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.206 253516 DEBUG nova.virt.libvirt.guest [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] detach device xml: <interface type="ethernet">
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <mac address="fa:16:3e:40:3a:8c"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <model type="virtio"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <mtu size="1442"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <target dev="tapb3599bd2-09"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]: </interface>
Nov 25 09:58:19 compute-0 nova_compute[253512]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.210 253516 DEBUG nova.virt.libvirt.guest [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:3a:8c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3599bd2-09"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.212 253516 DEBUG nova.virt.libvirt.guest [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:3a:8c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3599bd2-09"/></interface>not found in domain: <domain type='kvm' id='3'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <name>instance-00000006</name>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <uuid>e414c01f-d327-411b-9309-c4c4dabd5b4a</uuid>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <metadata>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:name>tempest-TestNetworkBasicOps-server-2009582697</nova:name>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:creationTime>2025-11-25 09:56:52</nova:creationTime>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:flavor name="m1.nano">
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:memory>128</nova:memory>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:disk>1</nova:disk>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:swap>0</nova:swap>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:vcpus>1</nova:vcpus>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </nova:flavor>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:owner>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </nova:owner>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:ports>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:port uuid="7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82">
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </nova:port>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:port uuid="b3599bd2-09f9-4143-abc8-745915f961e3">
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </nova:port>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </nova:ports>
Nov 25 09:58:19 compute-0 nova_compute[253512]: </nova:instance>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </metadata>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <memory unit='KiB'>131072</memory>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <vcpu placement='static'>1</vcpu>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <resource>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <partition>/machine</partition>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </resource>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <sysinfo type='smbios'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <system>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='manufacturer'>RDO</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='serial'>e414c01f-d327-411b-9309-c4c4dabd5b4a</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='uuid'>e414c01f-d327-411b-9309-c4c4dabd5b4a</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='family'>Virtual Machine</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </system>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </sysinfo>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <os>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <boot dev='hd'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <smbios mode='sysinfo'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </os>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <features>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <acpi/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <apic/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <vmcoreinfo state='on'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </features>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <vendor>AMD</vendor>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='x2apic'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='hypervisor'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='vaes'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='stibp'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='ssbd'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='overflow-recov'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='succor'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='lbrv'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='pause-filter'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='v-vmsave-vmload'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='vgif'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='svm'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='topoext'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='npt'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='nrip-save'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <clock offset='utc'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <timer name='hpet' present='no'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </clock>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <on_poweroff>destroy</on_poweroff>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <on_reboot>restart</on_reboot>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <on_crash>destroy</on_crash>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <disk type='network' device='disk'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <auth username='openstack'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <secret type='ceph' uuid='af1c9ae3-08d7-5547-a53d-2cccf7c6ef90'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <source protocol='rbd' name='vms/e414c01f-d327-411b-9309-c4c4dabd5b4a_disk' index='2'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.100' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.102' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.101' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </source>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target dev='vda' bus='virtio'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='virtio-disk0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <disk type='network' device='cdrom'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <auth username='openstack'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <secret type='ceph' uuid='af1c9ae3-08d7-5547-a53d-2cccf7c6ef90'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <source protocol='rbd' name='vms/e414c01f-d327-411b-9309-c4c4dabd5b4a_disk.config' index='1'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.100' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.102' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.101' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </source>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target dev='sda' bus='sata'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <readonly/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='sata0-0-0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pcie.0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='1' port='0x10'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='2' port='0x11'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='3' port='0x12'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='4' port='0x13'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.4'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='5' port='0x14'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.5'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='6' port='0x15'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.6'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='7' port='0x16'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.7'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='8' port='0x17'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.8'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='9' port='0x18'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.9'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='10' port='0x19'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.10'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='11' port='0x1a'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.11'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='12' port='0x1b'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.12'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='13' port='0x1c'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.13'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='14' port='0x1d'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.14'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='15' port='0x1e'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.15'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='16' port='0x1f'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.16'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='17' port='0x20'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.17'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='18' port='0x21'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.18'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='19' port='0x22'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.19'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='20' port='0x23'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.20'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='21' port='0x24'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.21'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='22' port='0x25'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.22'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='23' port='0x26'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.23'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='24' port='0x27'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.24'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='25' port='0x28'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.25'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-pci-bridge'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.26'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='usb'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='sata' index='0'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='ide'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <interface type='ethernet'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <mac address='fa:16:3e:03:f5:2a'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target dev='tap7f3b9b60-a3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model type='virtio'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <mtu size='1442'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='net0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <interface type='ethernet'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <mac address='fa:16:3e:40:3a:8c'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target dev='tapb3599bd2-09'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model type='virtio'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <mtu size='1442'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='net1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <serial type='pty'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <source path='/dev/pts/0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <log file='/var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/console.log' append='off'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target type='isa-serial' port='0'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <model name='isa-serial'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </target>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='serial0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </serial>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <source path='/dev/pts/0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <log file='/var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/console.log' append='off'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target type='serial' port='0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='serial0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </console>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <input type='tablet' bus='usb'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='input0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='usb' bus='0' port='1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </input>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <input type='mouse' bus='ps2'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='input1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </input>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <input type='keyboard' bus='ps2'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='input2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </input>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <listen type='address' address='::0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </graphics>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <audio id='1' type='none'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <video>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='video0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </video>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <watchdog model='itco' action='reset'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='watchdog0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </watchdog>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <memballoon model='virtio'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <stats period='10'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='balloon0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </memballoon>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <rng model='virtio'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <backend model='random'>/dev/urandom</backend>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='rng0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <label>system_u:system_r:svirt_t:s0:c229,c879</label>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c879</imagelabel>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </seclabel>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <label>+107:+107</label>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <imagelabel>+107:+107</imagelabel>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </seclabel>
Nov 25 09:58:19 compute-0 nova_compute[253512]: </domain>
Nov 25 09:58:19 compute-0 nova_compute[253512]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.212 253516 INFO nova.virt.libvirt.driver [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully detached device tapb3599bd2-09 from instance e414c01f-d327-411b-9309-c4c4dabd5b4a from the persistent domain config.
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.212 253516 DEBUG nova.virt.libvirt.driver [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] (1/8): Attempting to detach device tapb3599bd2-09 with device alias net1 from instance e414c01f-d327-411b-9309-c4c4dabd5b4a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.213 253516 DEBUG nova.virt.libvirt.guest [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] detach device xml: <interface type="ethernet">
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <mac address="fa:16:3e:40:3a:8c"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <model type="virtio"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <mtu size="1442"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <target dev="tapb3599bd2-09"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]: </interface>
Nov 25 09:58:19 compute-0 nova_compute[253512]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 09:58:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:19 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:19 compute-0 kernel: tapb3599bd2-09 (unregistering): left promiscuous mode
Nov 25 09:58:19 compute-0 NetworkManager[48903]: <info>  [1764064699.2652] device (tapb3599bd2-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:58:19 compute-0 ovn_controller[155020]: 2025-11-25T09:58:19Z|00062|binding|INFO|Releasing lport b3599bd2-09f9-4143-abc8-745915f961e3 from this chassis (sb_readonly=0)
Nov 25 09:58:19 compute-0 ovn_controller[155020]: 2025-11-25T09:58:19Z|00063|binding|INFO|Setting lport b3599bd2-09f9-4143-abc8-745915f961e3 down in Southbound
Nov 25 09:58:19 compute-0 ovn_controller[155020]: 2025-11-25T09:58:19Z|00064|binding|INFO|Removing iface tapb3599bd2-09 ovn-installed in OVS
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.269 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.273 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:3a:8c 10.100.0.24', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'e414c01f-d327-411b-9309-c4c4dabd5b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e3e0a9c-90d8-4bb2-a9a5-b8401547fa81, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=b3599bd2-09f9-4143-abc8-745915f961e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.274 164791 INFO neutron.agent.ovn.metadata.agent [-] Port b3599bd2-09f9-4143-abc8-745915f961e3 in datapath 23a0542a-b85d-40e7-8bd9-6ee0d43b0306 unbound from our chassis
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.274 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23a0542a-b85d-40e7-8bd9-6ee0d43b0306, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.279 253516 DEBUG nova.virt.libvirt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Received event <DeviceRemovedEvent: 1764064699.2785373, e414c01f-d327-411b-9309-c4c4dabd5b4a => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
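The DeviceRemovedEvent dispatched above is how nova learns the guest actually released the device: it does not rely on polling alone but waits for libvirt's DEVICE_REMOVED event. A standalone listener for the same event can be sketched as follows (assuming libvirt-python; the callback body is illustrative):

    import libvirt

    def device_removed(conn, dom, dev_alias, opaque):
        # Fires once QEMU confirms the guest released the device,
        # e.g. alias 'net1' for the tap device detached above.
        print('device removed from %s: %s' % (dom.UUIDString(), dev_alias))

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')  # URI assumed
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
        device_removed, None)
    while True:
        libvirt.virEventRunDefaultImpl()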
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.280 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[8f42d48e-e426-4148-935f-1146a29609d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.280 164791 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 namespace which is not needed anymore
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.289 253516 DEBUG nova.virt.libvirt.driver [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Start waiting for the detach event from libvirt for device tapb3599bd2-09 with device alias net1 for instance e414c01f-d327-411b-9309-c4c4dabd5b4a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.290 253516 DEBUG nova.virt.libvirt.guest [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:40:3a:8c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3599bd2-09"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.290 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.298 253516 DEBUG nova.virt.libvirt.guest [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:40:3a:8c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3599bd2-09"/></interface> not found in domain: <domain type='kvm' id='3'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <name>instance-00000006</name>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <uuid>e414c01f-d327-411b-9309-c4c4dabd5b4a</uuid>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <metadata>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:name>tempest-TestNetworkBasicOps-server-2009582697</nova:name>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:creationTime>2025-11-25 09:56:52</nova:creationTime>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:flavor name="m1.nano">
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:memory>128</nova:memory>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:disk>1</nova:disk>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:swap>0</nova:swap>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:vcpus>1</nova:vcpus>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </nova:flavor>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:owner>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </nova:owner>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:ports>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:port uuid="7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82">
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </nova:port>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:port uuid="b3599bd2-09f9-4143-abc8-745915f961e3">
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </nova:port>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </nova:ports>
Nov 25 09:58:19 compute-0 nova_compute[253512]: </nova:instance>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </metadata>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <memory unit='KiB'>131072</memory>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <vcpu placement='static'>1</vcpu>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <resource>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <partition>/machine</partition>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </resource>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <sysinfo type='smbios'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <system>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='manufacturer'>RDO</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='serial'>e414c01f-d327-411b-9309-c4c4dabd5b4a</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='uuid'>e414c01f-d327-411b-9309-c4c4dabd5b4a</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <entry name='family'>Virtual Machine</entry>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </system>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </sysinfo>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <os>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <boot dev='hd'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <smbios mode='sysinfo'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </os>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <features>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <acpi/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <apic/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <vmcoreinfo state='on'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </features>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <model fallback='forbid'>EPYC-Milan</model>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <vendor>AMD</vendor>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='x2apic'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='hypervisor'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='vaes'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='vpclmulqdq'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='stibp'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='ssbd'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='overflow-recov'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='succor'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='lbrv'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='pause-filter'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='v-vmsave-vmload'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='vgif'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='svm'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='require' name='topoext'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='npt'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='nrip-save'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <clock offset='utc'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <timer name='hpet' present='no'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </clock>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <on_poweroff>destroy</on_poweroff>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <on_reboot>restart</on_reboot>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <on_crash>destroy</on_crash>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <disk type='network' device='disk'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <auth username='openstack'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <secret type='ceph' uuid='af1c9ae3-08d7-5547-a53d-2cccf7c6ef90'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <source protocol='rbd' name='vms/e414c01f-d327-411b-9309-c4c4dabd5b4a_disk' index='2'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.100' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.102' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.101' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </source>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target dev='vda' bus='virtio'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='virtio-disk0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <disk type='network' device='cdrom'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <auth username='openstack'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <secret type='ceph' uuid='af1c9ae3-08d7-5547-a53d-2cccf7c6ef90'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <source protocol='rbd' name='vms/e414c01f-d327-411b-9309-c4c4dabd5b4a_disk.config' index='1'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.100' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.102' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <host name='192.168.122.101' port='6789'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </source>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target dev='sda' bus='sata'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <readonly/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='sata0-0-0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pcie.0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='1' port='0x10'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='2' port='0x11'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='3' port='0x12'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='4' port='0x13'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.4'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='5' port='0x14'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.5'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='6' port='0x15'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.6'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='7' port='0x16'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.7'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='8' port='0x17'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.8'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='9' port='0x18'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.9'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='10' port='0x19'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.10'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='11' port='0x1a'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.11'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='12' port='0x1b'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.12'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='13' port='0x1c'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.13'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='14' port='0x1d'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.14'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='15' port='0x1e'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.15'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='16' port='0x1f'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.16'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='17' port='0x20'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.17'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='18' port='0x21'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.18'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='19' port='0x22'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.19'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='20' port='0x23'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.20'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='21' port='0x24'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.21'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='22' port='0x25'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.22'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='23' port='0x26'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.23'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='24' port='0x27'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.24'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-root-port'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target chassis='25' port='0x28'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.25'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model name='pcie-pci-bridge'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='pci.26'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='usb'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <controller type='sata' index='0'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='ide'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </controller>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <interface type='ethernet'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <mac address='fa:16:3e:03:f5:2a'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target dev='tap7f3b9b60-a3'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model type='virtio'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <mtu size='1442'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='net0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <serial type='pty'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <source path='/dev/pts/0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <log file='/var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/console.log' append='off'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target type='isa-serial' port='0'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:         <model name='isa-serial'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       </target>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='serial0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </serial>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <source path='/dev/pts/0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <log file='/var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a/console.log' append='off'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <target type='serial' port='0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='serial0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </console>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <input type='tablet' bus='usb'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='input0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='usb' bus='0' port='1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </input>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <input type='mouse' bus='ps2'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='input1'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </input>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <input type='keyboard' bus='ps2'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='input2'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </input>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <listen type='address' address='::0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </graphics>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <audio id='1' type='none'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <video>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='video0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </video>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <watchdog model='itco' action='reset'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='watchdog0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </watchdog>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <memballoon model='virtio'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <stats period='10'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='balloon0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </memballoon>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <rng model='virtio'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <backend model='random'>/dev/urandom</backend>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <alias name='rng0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <label>system_u:system_r:svirt_t:s0:c229,c879</label>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c879</imagelabel>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </seclabel>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <label>+107:+107</label>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <imagelabel>+107:+107</imagelabel>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </seclabel>
Nov 25 09:58:19 compute-0 nova_compute[253512]: </domain>
Nov 25 09:58:19 compute-0 nova_compute[253512]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.298 253516 INFO nova.virt.libvirt.driver [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully detached device tapb3599bd2-09 from instance e414c01f-d327-411b-9309-c4c4dabd5b4a from the live domain config.
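The "not found in domain" comparison above amounts to re-reading the live domain XML and scanning its <interface> elements for the tap device. A minimal sketch of that check (assuming libvirt-python; identifiers taken from this log):

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')  # URI assumed
    dom = conn.lookupByUUIDString('e414c01f-d327-411b-9309-c4c4dabd5b4a')
    tree = ET.fromstring(dom.XMLDesc(0))
    # Detach is confirmed when no <interface><target dev=.../> carries
    # the tap device name any more.
    still_present = any(
        target.get('dev') == 'tapb3599bd2-09'
        for target in tree.findall('./devices/interface/target'))
    print('still attached' if still_present else 'detached')
    conn.close()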
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.299 253516 DEBUG nova.virt.libvirt.vif [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2009582697',display_name='tempest-TestNetworkBasicOps-server-2009582697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2009582697',id=6,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD21EiYZKhXbIpyaNEUjP1ulP9c0zDwkxr0Xxe9kxy5T7Kh/aZqrRNdEYeVYyDq7wYIqSwgggji3NCoHXpcuxZfFxnprvDIJCcOEcX/dIdfv+vRs+aEB3wFMQZGt8WdE2g==',key_name='tempest-TestNetworkBasicOps-1281314821',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:56:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-36v3wqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:56:29Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=e414c01f-d327-411b-9309-c4c4dabd5b4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.299 253516 DEBUG nova.network.os_vif_util [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "b3599bd2-09f9-4143-abc8-745915f961e3", "address": "fa:16:3e:40:3a:8c", "network": {"id": "23a0542a-b85d-40e7-8bd9-6ee0d43b0306", "bridge": "br-int", "label": "tempest-network-smoke--806543765", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3599bd2-09", "ovs_interfaceid": "b3599bd2-09f9-4143-abc8-745915f961e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.299 253516 DEBUG nova.network.os_vif_util [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=b3599bd2-09f9-4143-abc8-745915f961e3,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3599bd2-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.300 253516 DEBUG os_vif [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=b3599bd2-09f9-4143-abc8-745915f961e3,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3599bd2-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.303 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.303 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3599bd2-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.304 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.306 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.308 253516 INFO os_vif [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=b3599bd2-09f9-4143-abc8-745915f961e3,network=Network(23a0542a-b85d-40e7-8bd9-6ee0d43b0306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3599bd2-09')
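Under os_vif's ovs plugin, the unplug boils down to removing the tap port from br-int, which is the DelPortCommand visible in the transaction above (driven through ovsdbapp's native OVSDB IDL). The command-line equivalent, as a sketch only, not what nova executes:

    import subprocess

    # Equivalent of DelPortCommand(port=..., bridge=..., if_exists=True)
    # from the ovsdbapp transaction logged above.
    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tapb3599bd2-09'],
        check=True)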
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.308 253516 DEBUG nova.virt.libvirt.guest [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:name>tempest-TestNetworkBasicOps-server-2009582697</nova:name>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:creationTime>2025-11-25 09:58:19</nova:creationTime>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:flavor name="m1.nano">
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:memory>128</nova:memory>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:disk>1</nova:disk>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:swap>0</nova:swap>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:vcpus>1</nova:vcpus>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </nova:flavor>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:owner>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </nova:owner>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   <nova:ports>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     <nova:port uuid="7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82">
Nov 25 09:58:19 compute-0 nova_compute[253512]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 09:58:19 compute-0 nova_compute[253512]:     </nova:port>
Nov 25 09:58:19 compute-0 nova_compute[253512]:   </nova:ports>
Nov 25 09:58:19 compute-0 nova_compute[253512]: </nova:instance>
Nov 25 09:58:19 compute-0 nova_compute[253512]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 09:58:19 compute-0 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[264523]: [NOTICE]   (264527) : haproxy version is 2.8.14-c23fe91
Nov 25 09:58:19 compute-0 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[264523]: [NOTICE]   (264527) : path to executable is /usr/sbin/haproxy
Nov 25 09:58:19 compute-0 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[264523]: [WARNING]  (264527) : Exiting Master process...
Nov 25 09:58:19 compute-0 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[264523]: [ALERT]    (264527) : Current worker (264529) exited with code 143 (Terminated)
Nov 25 09:58:19 compute-0 neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306[264523]: [WARNING]  (264527) : All workers exited. Exiting... (0)
Nov 25 09:58:19 compute-0 systemd[1]: libpod-9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e.scope: Deactivated successfully.
Nov 25 09:58:19 compute-0 podman[266206]: 2025-11-25 09:58:19.402409217 +0000 UTC m=+0.036843920 container died 9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 09:58:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e-userdata-shm.mount: Deactivated successfully.
Nov 25 09:58:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b7f83b9c37fe676da180aa03b11c32fbf080c4a7d5bf6588d8cec27b6712b6f-merged.mount: Deactivated successfully.
Nov 25 09:58:19 compute-0 podman[266206]: 2025-11-25 09:58:19.427941502 +0000 UTC m=+0.062376205 container cleanup 9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 09:58:19 compute-0 systemd[1]: libpod-conmon-9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e.scope: Deactivated successfully.
Nov 25 09:58:19 compute-0 podman[266230]: 2025-11-25 09:58:19.470669595 +0000 UTC m=+0.026163375 container remove 9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.476 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[79abf730-6f35-4ab8-ba96-69fc023fba2a]: (4, ('Tue Nov 25 09:58:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 (9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e)\n9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e\nTue Nov 25 09:58:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 (9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e)\n9d61b247c70c0cc732d6579ed5d75e515741eea9fecb752fdc1f9e805064d98e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.478 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5ff3b9-7f02-4907-8464-da191b69b343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.479 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23a0542a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.481 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:19 compute-0 kernel: tap23a0542a-b0: left promiscuous mode
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.497 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.499 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.502 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[caaaf94a-2018-4b7f-945c-91c8f2914603]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.511 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9d4e13-1061-44b5-8e07-fa230266fc82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.512 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc6d468-40e2-43af-bccb-c827eeaca8e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.525 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[97d32dc3-731c-4c87-b366-1cfe405b867d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 339337, 'reachable_time': 30104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266241, 'error': None, 'target': 'ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d23a0542a\x2db85d\x2d40e7\x2d8bd9\x2d6ee0d43b0306.mount: Deactivated successfully.
Nov 25 09:58:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 2.3 KiB/s wr, 0 op/s
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.529 164901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-23a0542a-b85d-40e7-8bd9-6ee0d43b0306 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:58:19 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:19.529 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[40d65a6a-8783-4f3f-851b-92a5ac3c5022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.719 253516 DEBUG nova.compute.manager [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-unplugged-b3599bd2-09f9-4143-abc8-745915f961e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.720 253516 DEBUG oslo_concurrency.lockutils [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.720 253516 DEBUG oslo_concurrency.lockutils [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.720 253516 DEBUG oslo_concurrency.lockutils [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.720 253516 DEBUG nova.compute.manager [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] No waiting events found dispatching network-vif-unplugged-b3599bd2-09f9-4143-abc8-745915f961e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.721 253516 WARNING nova.compute.manager [req-33af0af4-98e3-496f-a08d-935b5e0cb776 req-f15ec873-5a8a-4c66-b737-44a6919199e9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received unexpected event network-vif-unplugged-b3599bd2-09f9-4143-abc8-745915f961e3 for instance with vm_state active and task_state None.
Nov 25 09:58:19 compute-0 nova_compute[253512]: 2025-11-25 09:58:19.735 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:19 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:19 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:20.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 25 09:58:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 25 09:58:20 compute-0 nova_compute[253512]: 2025-11-25 09:58:20.305 253516 DEBUG oslo_concurrency.lockutils [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:58:20 compute-0 nova_compute[253512]: 2025-11-25 09:58:20.305 253516 DEBUG oslo_concurrency.lockutils [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:58:20 compute-0 nova_compute[253512]: 2025-11-25 09:58:20.305 253516 DEBUG nova.network.neutron [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:58:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:20.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:20 compute-0 ceph-mon[74207]: pgmap v813: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 2.3 KiB/s wr, 0 op/s
Nov 25 09:58:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1899497758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/323938496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:20 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:21 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:21 compute-0 ovn_controller[155020]: 2025-11-25T09:58:21Z|00065|binding|INFO|Releasing lport 1198a2e0-5a95-4f4d-8225-c7b2e30ebbe1 from this chassis (sb_readonly=0)
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.422 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.688 253516 INFO nova.network.neutron [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Port b3599bd2-09f9-4143-abc8-745915f961e3 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.688 253516 DEBUG nova.network.neutron [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [{"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.707 253516 DEBUG oslo_concurrency.lockutils [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.720 253516 DEBUG oslo_concurrency.lockutils [None req-19f5f29d-a4cf-45b4-9694-bbe199668519 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "interface-e414c01f-d327-411b-9309-c4c4dabd5b4a-b3599bd2-09f9-4143-abc8-745915f961e3" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:21 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.814 253516 DEBUG nova.compute.manager [req-22b6687f-7683-449d-a1c9-1dd458ce4e65 req-e5b4ef09-0da1-46e6-aeb0-694c91fb5ab2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.814 253516 DEBUG oslo_concurrency.lockutils [req-22b6687f-7683-449d-a1c9-1dd458ce4e65 req-e5b4ef09-0da1-46e6-aeb0-694c91fb5ab2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.814 253516 DEBUG oslo_concurrency.lockutils [req-22b6687f-7683-449d-a1c9-1dd458ce4e65 req-e5b4ef09-0da1-46e6-aeb0-694c91fb5ab2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.815 253516 DEBUG oslo_concurrency.lockutils [req-22b6687f-7683-449d-a1c9-1dd458ce4e65 req-e5b4ef09-0da1-46e6-aeb0-694c91fb5ab2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.815 253516 DEBUG nova.compute.manager [req-22b6687f-7683-449d-a1c9-1dd458ce4e65 req-e5b4ef09-0da1-46e6-aeb0-694c91fb5ab2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] No waiting events found dispatching network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.815 253516 WARNING nova.compute.manager [req-22b6687f-7683-449d-a1c9-1dd458ce4e65 req-e5b4ef09-0da1-46e6-aeb0-694c91fb5ab2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received unexpected event network-vif-plugged-b3599bd2-09f9-4143-abc8-745915f961e3 for instance with vm_state active and task_state None.
Nov 25 09:58:21 compute-0 nova_compute[253512]: 2025-11-25 09:58:21.815 253516 DEBUG nova.compute.manager [req-22b6687f-7683-449d-a1c9-1dd458ce4e65 req-e5b4ef09-0da1-46e6-aeb0-694c91fb5ab2 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-deleted-b3599bd2-09f9-4143-abc8-745915f961e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:22.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:22.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.479 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.485 253516 DEBUG oslo_concurrency.lockutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.485 253516 DEBUG oslo_concurrency.lockutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.486 253516 DEBUG oslo_concurrency.lockutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.486 253516 DEBUG oslo_concurrency.lockutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.486 253516 DEBUG oslo_concurrency.lockutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.487 253516 INFO nova.compute.manager [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Terminating instance
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.488 253516 DEBUG nova.compute.manager [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:58:22 compute-0 sudo[266246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:58:22 compute-0 sudo[266246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:22 compute-0 sudo[266246]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:22 compute-0 kernel: tap7f3b9b60-a3 (unregistering): left promiscuous mode
Nov 25 09:58:22 compute-0 NetworkManager[48903]: <info>  [1764064702.5260] device (tap7f3b9b60-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.535 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00066|binding|INFO|Releasing lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 from this chassis (sb_readonly=0)
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00067|binding|INFO|Setting lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 down in Southbound
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00068|binding|INFO|Removing iface tap7f3b9b60-a3 ovn-installed in OVS
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.537 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.552 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:f5:2a 10.100.0.6'], port_security=['fa:16:3e:03:f5:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e414c01f-d327-411b-9309-c4c4dabd5b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da31a90-4851-4e23-b49c-d37e40c75813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb028197-733c-4fbd-bd01-615e4c545aa9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09fba177-1b7b-4e1a-96ee-300569eeb103, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.553 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 in datapath 1da31a90-4851-4e23-b49c-d37e40c75813 unbound from our chassis
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.554 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da31a90-4851-4e23-b49c-d37e40c75813, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.555 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[35d4744a-2d82-4e02-b29b-296687e41b81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.555 164791 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813 namespace which is not needed anymore
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.561 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 25 09:58:22 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 13.661s CPU time.
Nov 25 09:58:22 compute-0 systemd-machined[216497]: Machine qemu-3-instance-00000006 terminated.
Nov 25 09:58:22 compute-0 ceph-mon[74207]: pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 09:58:22 compute-0 neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813[264358]: [NOTICE]   (264362) : haproxy version is 2.8.14-c23fe91
Nov 25 09:58:22 compute-0 neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813[264358]: [NOTICE]   (264362) : path to executable is /usr/sbin/haproxy
Nov 25 09:58:22 compute-0 neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813[264358]: [WARNING]  (264362) : Exiting Master process...
Nov 25 09:58:22 compute-0 neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813[264358]: [ALERT]    (264362) : Current worker (264364) exited with code 143 (Terminated)
Nov 25 09:58:22 compute-0 neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813[264358]: [WARNING]  (264362) : All workers exited. Exiting... (0)
Nov 25 09:58:22 compute-0 systemd[1]: libpod-67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7.scope: Deactivated successfully.
Nov 25 09:58:22 compute-0 podman[266291]: 2025-11-25 09:58:22.654771598 +0000 UTC m=+0.033848564 container died 67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 09:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7-userdata-shm.mount: Deactivated successfully.
Nov 25 09:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2027286fa675275eb4f949d66488a7381f6c7906a3492c20208baab7bd42706c-merged.mount: Deactivated successfully.
Nov 25 09:58:22 compute-0 podman[266291]: 2025-11-25 09:58:22.679287646 +0000 UTC m=+0.058364613 container cleanup 67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:58:22 compute-0 systemd[1]: libpod-conmon-67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7.scope: Deactivated successfully.
Nov 25 09:58:22 compute-0 kernel: tap7f3b9b60-a3: entered promiscuous mode
Nov 25 09:58:22 compute-0 systemd-udevd[266274]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:58:22 compute-0 kernel: tap7f3b9b60-a3 (unregistering): left promiscuous mode
Nov 25 09:58:22 compute-0 NetworkManager[48903]: <info>  [1764064702.7015] manager: (tap7f3b9b60-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.703 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00069|binding|INFO|Claiming lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 for this chassis.
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00070|binding|INFO|7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82: Claiming fa:16:3e:03:f5:2a 10.100.0.6
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.721 253516 INFO nova.virt.libvirt.driver [-] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Instance destroyed successfully.
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.722 253516 DEBUG nova.objects.instance [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid e414c01f-d327-411b-9309-c4c4dabd5b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.723 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:f5:2a 10.100.0.6'], port_security=['fa:16:3e:03:f5:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e414c01f-d327-411b-9309-c4c4dabd5b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da31a90-4851-4e23-b49c-d37e40c75813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb028197-733c-4fbd-bd01-615e4c545aa9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09fba177-1b7b-4e1a-96ee-300569eeb103, chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.728 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00071|binding|INFO|Setting lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 ovn-installed in OVS
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00072|binding|INFO|Setting lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 up in Southbound
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00073|binding|INFO|Releasing lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 from this chassis (sb_readonly=1)
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.730 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00074|binding|INFO|Removing iface tap7f3b9b60-a3 ovn-installed in OVS
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00075|if_status|INFO|Not setting lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 down as sb is readonly
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00076|binding|INFO|Releasing lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 from this chassis (sb_readonly=0)
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.732 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 ovn_controller[155020]: 2025-11-25T09:58:22Z|00077|binding|INFO|Setting lport 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 down in Southbound
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.734 253516 DEBUG nova.virt.libvirt.vif [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2009582697',display_name='tempest-TestNetworkBasicOps-server-2009582697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2009582697',id=6,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD21EiYZKhXbIpyaNEUjP1ulP9c0zDwkxr0Xxe9kxy5T7Kh/aZqrRNdEYeVYyDq7wYIqSwgggji3NCoHXpcuxZfFxnprvDIJCcOEcX/dIdfv+vRs+aEB3wFMQZGt8WdE2g==',key_name='tempest-TestNetworkBasicOps-1281314821',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:56:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-36v3wqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:56:29Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=e414c01f-d327-411b-9309-c4c4dabd5b4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.734 253516 DEBUG nova.network.os_vif_util [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "address": "fa:16:3e:03:f5:2a", "network": {"id": "1da31a90-4851-4e23-b49c-d37e40c75813", "bridge": "br-int", "label": "tempest-network-smoke--1968340819", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f3b9b60-a3", "ovs_interfaceid": "7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.734 253516 DEBUG nova.network.os_vif_util [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:f5:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82,network=Network(1da31a90-4851-4e23-b49c-d37e40c75813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f3b9b60-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.735 253516 DEBUG os_vif [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:f5:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82,network=Network(1da31a90-4851-4e23-b49c-d37e40c75813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f3b9b60-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.736 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.736 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f3b9b60-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.738 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.739 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.742 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:f5:2a 10.100.0.6'], port_security=['fa:16:3e:03:f5:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e414c01f-d327-411b-9309-c4c4dabd5b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da31a90-4851-4e23-b49c-d37e40c75813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb028197-733c-4fbd-bd01-615e4c545aa9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09fba177-1b7b-4e1a-96ee-300569eeb103, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:58:22 compute-0 podman[266315]: 2025-11-25 09:58:22.744864654 +0000 UTC m=+0.046541664 container remove 67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.749 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.750 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[ad295c54-6fb9-408c-82d6-4ba007dd294a]: (4, ('Tue Nov 25 09:58:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813 (67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7)\n67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7\nTue Nov 25 09:58:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813 (67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7)\n67c62e8aa23d8394c744564b31280a5425c93999636f32faa81c3f2c91b859e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.751 253516 INFO os_vif [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:f5:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82,network=Network(1da31a90-4851-4e23-b49c-d37e40c75813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f3b9b60-a3')
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.753 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[51b9d46f-c886-4354-a31d-23cbf37cbb9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.753 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da31a90-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:58:22 compute-0 kernel: tap1da31a90-40: left promiscuous mode
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.765 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.770 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.773 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[7b640880-18a3-4374-b4e7-48d77b6035b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.788 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c0229f-b539-430a-9183-dde2995be4cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.788 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[debd7be3-6459-425f-b3c5-d039d9acf7b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.801 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[99ecd60d-bafe-4342-9a7c-a1faddb52f49]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 336955, 'reachable_time': 23692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266348, 'error': None, 'target': 'ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
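The privsep reply above carries a pyroute2-style RTM_NEWLINK message: interface attributes arrive as a list of [name, value] pairs under 'attrs', not as a dict, since netlink allows the same attribute to repeat. A minimal sketch of pulling values out of such a message; the trimmed dictionary below is a hypothetical stand-in for the full record logged above (real messages come from pyroute2, which offers an equivalent .get_attr() helper):

    # Trimmed stand-in for the RTM_NEWLINK dict in the log line above.
    link = {
        'index': 1,
        'attrs': [
            ['IFLA_IFNAME', 'lo'],
            ['IFLA_MTU', 65536],
            ['IFLA_OPERSTATE', 'UNKNOWN'],
        ],
    }

    def get_attr(msg, name, default=None):
        # 'attrs' is a list of [name, value] pairs; the same key may
        # legally appear more than once, so scan rather than index.
        for key, value in msg['attrs']:
            if key == name:
                return value
        return default

    print(get_attr(link, 'IFLA_IFNAME'))  # -> 'lo'
    print(get_attr(link, 'IFLA_MTU'))     # -> 65536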
Nov 25 09:58:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d1da31a90\x2d4851\x2d4e23\x2db49c\x2dd37e40c75813.mount: Deactivated successfully.
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.805 164901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.805 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[142251f2-ee1c-4282-9a92-5d9f1662267c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
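The three lines above are the tail of a namespace teardown: systemd reports the bind mount under /run/netns gone, then the privileged remove_netns call returns. A minimal sketch of what that call amounts to, assuming pyroute2 (which neutron's ip_lib drives through oslo.privsep); the namespace name is taken from the log, and the error handling here is illustrative rather than neutron's exact code:

    import errno
    from pyroute2 import netns

    NS = 'ovnmeta-1da31a90-4851-4e23-b49c-d37e40c75813'

    try:
        netns.remove(NS)          # unlinks /run/netns/<NS>
    except OSError as exc:
        if exc.errno != errno.ENOENT:
            raise                 # only "already gone" is tolerated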
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.807 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 in datapath 1da31a90-4851-4e23-b49c-d37e40c75813 unbound from our chassis
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.808 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da31a90-4851-4e23-b49c-d37e40c75813, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.809 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[c14303ac-5d9f-4a13-8a0a-9f1a22a83d26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.810 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 in datapath 1da31a90-4851-4e23-b49c-d37e40c75813 unbound from our chassis
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.811 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da31a90-4851-4e23-b49c-d37e40c75813, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:58:22 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:22.811 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[91189760-6749-428f-aca0-dacf9c18e3b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.916 253516 INFO nova.virt.libvirt.driver [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Deleting instance files /var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a_del
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.916 253516 INFO nova.virt.libvirt.driver [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Deletion of /var/lib/nova/instances/e414c01f-d327-411b-9309-c4c4dabd5b4a_del complete
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.957 253516 INFO nova.compute.manager [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Took 0.47 seconds to destroy the instance on the hypervisor.
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.957 253516 DEBUG oslo.service.loopingcall [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.958 253516 DEBUG nova.compute.manager [-] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:58:22 compute-0 nova_compute[253512]: 2025-11-25 09:58:22.958 253516 DEBUG nova.network.neutron [-] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
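The "Waiting for function ... to return" line above is oslo.service's loopingcall machinery: a worker is polled until it signals completion by raising LoopingCallDone. A hedged sketch of that pattern with a stand-in worker (nova's actual retry wrapper around network deallocation adds backoff and error handling on top):

    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _deallocate():
        attempts['n'] += 1
        if attempts['n'] < 3:      # pretend the first two tries fail
            return                 # returning keeps the loop running
        raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate)
    result = timer.start(interval=0.1).wait()   # blocks until Done
    print(result)                               # -> True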
Nov 25 09:58:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:22 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:22 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:23 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7280031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.467 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.483 253516 DEBUG nova.compute.manager [req-da75c438-3c05-4bad-90aa-c2d968348bd5 req-3a171110-814a-478a-bdbd-96619012802a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-unplugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.483 253516 DEBUG oslo_concurrency.lockutils [req-da75c438-3c05-4bad-90aa-c2d968348bd5 req-3a171110-814a-478a-bdbd-96619012802a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.483 253516 DEBUG oslo_concurrency.lockutils [req-da75c438-3c05-4bad-90aa-c2d968348bd5 req-3a171110-814a-478a-bdbd-96619012802a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.483 253516 DEBUG oslo_concurrency.lockutils [req-da75c438-3c05-4bad-90aa-c2d968348bd5 req-3a171110-814a-478a-bdbd-96619012802a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.483 253516 DEBUG nova.compute.manager [req-da75c438-3c05-4bad-90aa-c2d968348bd5 req-3a171110-814a-478a-bdbd-96619012802a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] No waiting events found dispatching network-vif-unplugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.483 253516 DEBUG nova.compute.manager [req-da75c438-3c05-4bad-90aa-c2d968348bd5 req-3a171110-814a-478a-bdbd-96619012802a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-unplugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
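The acquire/release pair around pop_instance_event above is oslo.concurrency's named locking; the per-instance "-events" lock serialises event dispatch against any registered waiters. A minimal sketch of the same primitive, with the lock name mirroring the log (the event table and body are illustrative, not nova's):

    from oslo_concurrency import lockutils

    events = {}

    @lockutils.synchronized('e414c01f-d327-411b-9309-c4c4dabd5b4a-events')
    def _pop_event(name):
        # Atomically remove and return a waiting event, if any.
        return events.pop(name, None)

    print(_pop_event('network-vif-unplugged'))  # -> None, no waiters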
Nov 25 09:58:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 09:58:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2920216429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.760 253516 DEBUG nova.network.neutron [-] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.776 253516 INFO nova.compute.manager [-] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Took 0.82 seconds to deallocate network for instance.
Nov 25 09:58:23 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:23 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.808 253516 DEBUG oslo_concurrency.lockutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.808 253516 DEBUG oslo_concurrency.lockutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.900 253516 DEBUG nova.scheduler.client.report [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Refreshing inventories for resource provider d9873737-caae-40cc-9346-77a33537057c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.911 253516 DEBUG nova.compute.manager [req-aaf04eb8-3856-4afc-ba48-6966377ec848 req-5ed04a7e-d2ee-4f3a-8545-6ebcba70d200 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-changed-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.911 253516 DEBUG nova.compute.manager [req-aaf04eb8-3856-4afc-ba48-6966377ec848 req-5ed04a7e-d2ee-4f3a-8545-6ebcba70d200 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing instance network info cache due to event network-changed-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.912 253516 DEBUG oslo_concurrency.lockutils [req-aaf04eb8-3856-4afc-ba48-6966377ec848 req-5ed04a7e-d2ee-4f3a-8545-6ebcba70d200 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.912 253516 DEBUG oslo_concurrency.lockutils [req-aaf04eb8-3856-4afc-ba48-6966377ec848 req-5ed04a7e-d2ee-4f3a-8545-6ebcba70d200 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.912 253516 DEBUG nova.network.neutron [req-aaf04eb8-3856-4afc-ba48-6966377ec848 req-5ed04a7e-d2ee-4f3a-8545-6ebcba70d200 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Refreshing network info cache for port 7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.981 253516 DEBUG nova.scheduler.client.report [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Updating ProviderTree inventory for provider d9873737-caae-40cc-9346-77a33537057c from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 09:58:23 compute-0 nova_compute[253512]: 2025-11-25 09:58:23.982 253516 DEBUG nova.compute.provider_tree [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Updating inventory in ProviderTree for provider d9873737-caae-40cc-9346-77a33537057c with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
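For each resource class in the inventory above, placement's schedulable capacity works out to (total - reserved) * allocation_ratio, which is why this 4-vCPU host can carry 16 vCPUs of allocations:

    inventory = {
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 16.0        (4 physical vCPUs oversubscribed 4x)
    # MEMORY_MB 7169.0 (512 MB held back for the host)
    # DISK_GB 52.2     (disk deliberately under-subscribed at 0.9)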
Nov 25 09:58:23 compute-0 podman[266352]: 2025-11-25 09:58:23.990615528 +0000 UTC m=+0.044027263 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 09:58:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:24.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
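Each radosgw request above produces a start/done pair plus a beast access line; the anonymous "HEAD /" repeating every couple of seconds from .100 and .102 looks like a load-balancer health probe. A sketch for pulling client, verb, status and latency out of the beast line; the regex is fitted to the fields visible here and may need adjusting for other radosgw configurations:

    import re

    line = ('beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous '
            '[25/Nov/2025:09:58:24.119 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000010s')

    m = re.search(r'beast: \S+: (\S+) .*"(\w+) (\S+) [^"]*" (\d+) \d+ '
                  r'.*latency=([\d.]+)s', line)
    client, method, path, status, latency = m.groups()
    print(client, method, path, status, float(latency))
    # -> 192.168.122.102 HEAD / 200 0.00100001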
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.221 253516 DEBUG nova.scheduler.client.report [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Refreshing aggregate associations for resource provider d9873737-caae-40cc-9346-77a33537057c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.246 253516 DEBUG nova.scheduler.client.report [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Refreshing trait associations for resource provider d9873737-caae-40cc-9346-77a33537057c, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX512VPCLMULQDQ,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX512VAES,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.266 253516 DEBUG nova.network.neutron [req-aaf04eb8-3856-4afc-ba48-6966377ec848 req-5ed04a7e-d2ee-4f3a-8545-6ebcba70d200 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.282 253516 DEBUG oslo_concurrency.processutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:58:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:24.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.492 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.537 253516 DEBUG nova.network.neutron [req-aaf04eb8-3856-4afc-ba48-6966377ec848 req-5ed04a7e-d2ee-4f3a-8545-6ebcba70d200 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.549 253516 DEBUG oslo_concurrency.lockutils [req-aaf04eb8-3856-4afc-ba48-6966377ec848 req-5ed04a7e-d2ee-4f3a-8545-6ebcba70d200 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-e414c01f-d327-411b-9309-c4c4dabd5b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.549 253516 DEBUG nova.compute.manager [req-aaf04eb8-3856-4afc-ba48-6966377ec848 req-5ed04a7e-d2ee-4f3a-8545-6ebcba70d200 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-deleted-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:58:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288411704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:24 compute-0 ceph-mon[74207]: pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 09:58:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3779375704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.626 253516 DEBUG oslo_concurrency.processutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
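The CMD pair above is nova shelling out for Ceph pool capacity. A sketch of the same call through the oslo.concurrency helper it logs; running it needs a reachable cluster and the client.openstack keyring, and the 'total_bytes' key reflects the usual ceph df JSON layout, so treat this as illustrative:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])   # cluster-wide raw capacity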
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.630 253516 DEBUG nova.compute.provider_tree [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.638 253516 DEBUG nova.scheduler.client.report [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.649 253516 DEBUG oslo_concurrency.lockutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.650 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.651 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.651 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.651 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.684 253516 INFO nova.scheduler.client.report [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance e414c01f-d327-411b-9309-c4c4dabd5b4a
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.733 253516 DEBUG oslo_concurrency.lockutils [None req-d45f59d2-a179-4467-9eba-b8654525f84d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.737 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:24 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:58:24.871 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:58:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:58:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3997797534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:24 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:24 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:24 compute-0 nova_compute[253512]: 2025-11-25 09:58:24.976 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.179 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.180 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4616MB free_disk=59.942447662353516GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.180 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.181 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.217 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.217 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.234 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:58:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:25 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 09:58:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:58:25 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4173911749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.555 253516 DEBUG nova.compute.manager [req-e1c4d089-a426-4e8b-a80e-4a17f136e380 req-b6ea9c4c-165f-4e92-a56e-60c85f5147f1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received event network-vif-plugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.556 253516 DEBUG oslo_concurrency.lockutils [req-e1c4d089-a426-4e8b-a80e-4a17f136e380 req-b6ea9c4c-165f-4e92-a56e-60c85f5147f1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.556 253516 DEBUG oslo_concurrency.lockutils [req-e1c4d089-a426-4e8b-a80e-4a17f136e380 req-b6ea9c4c-165f-4e92-a56e-60c85f5147f1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.557 253516 DEBUG oslo_concurrency.lockutils [req-e1c4d089-a426-4e8b-a80e-4a17f136e380 req-b6ea9c4c-165f-4e92-a56e-60c85f5147f1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "e414c01f-d327-411b-9309-c4c4dabd5b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.557 253516 DEBUG nova.compute.manager [req-e1c4d089-a426-4e8b-a80e-4a17f136e380 req-b6ea9c4c-165f-4e92-a56e-60c85f5147f1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] No waiting events found dispatching network-vif-plugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.557 253516 WARNING nova.compute.manager [req-e1c4d089-a426-4e8b-a80e-4a17f136e380 req-b6ea9c4c-165f-4e92-a56e-60c85f5147f1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Received unexpected event network-vif-plugged-7f3b9b60-a3eb-4679-9f5a-0e6eb66bda82 for instance with vm_state deleted and task_state None.
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.567 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.570 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.581 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.592 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:58:25 compute-0 nova_compute[253512]: 2025-11-25 09:58:25.592 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:58:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1288411704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3997797534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4173911749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:25 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7280031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:26.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:26 compute-0 nova_compute[253512]: 2025-11-25 09:58:26.592 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-0 nova_compute[253512]: 2025-11-25 09:58:26.593 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:58:26 compute-0 nova_compute[253512]: 2025-11-25 09:58:26.593 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:58:26 compute-0 nova_compute[253512]: 2025-11-25 09:58:26.611 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:58:26 compute-0 nova_compute[253512]: 2025-11-25 09:58:26.611 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-0 nova_compute[253512]: 2025-11-25 09:58:26.611 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-0 nova_compute[253512]: 2025-11-25 09:58:26.611 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-0 nova_compute[253512]: 2025-11-25 09:58:26.611 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:26 compute-0 nova_compute[253512]: 2025-11-25 09:58:26.611 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:58:26 compute-0 ceph-mon[74207]: pgmap v816: 337 pgs: 337 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 09:58:26 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:26 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:27.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:27.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:27.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:27.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
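All three webhook targets above fail the same way: the resolver at 192.168.122.80 has no records for the .shiftstack names, so alertmanager can never deliver and keeps retrying. The lookup is easy to reproduce outside the container:

    import socket

    for host in ('np0005534694.shiftstack',
                 'np0005534695.shiftstack',
                 'np0005534696.shiftstack'):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, 'resolves')
        except socket.gaierror as exc:
            print(host, 'failed:', exc)   # matches "no such host"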
Nov 25 09:58:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:27 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb704006700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:27 compute-0 nova_compute[253512]: 2025-11-25 09:58:27.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:27 compute-0 nova_compute[253512]: 2025-11-25 09:58:27.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 09:58:27 compute-0 nova_compute[253512]: 2025-11-25 09:58:27.483 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 09:58:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Nov 25 09:58:27 compute-0 nova_compute[253512]: 2025-11-25 09:58:27.737 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:27 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:28.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:28.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:28 compute-0 ceph-mon[74207]: pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Nov 25 09:58:28 compute-0 nova_compute[253512]: 2025-11-25 09:58:28.642 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:28 compute-0 nova_compute[253512]: 2025-11-25 09:58:28.735 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:28 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 09:58:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:28 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7280040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:29 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.0 KiB/s wr, 56 op/s
Nov 25 09:58:29 compute-0 nova_compute[253512]: 2025-11-25 09:58:29.738 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:29 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:29 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7040068a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:58:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:30.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:30] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Nov 25 09:58:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:30] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Nov 25 09:58:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:30.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:30 compute-0 ceph-mon[74207]: pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.0 KiB/s wr, 56 op/s
Nov 25 09:58:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:30 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6f40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:31 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7280040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 09:58:31 compute-0 nova_compute[253512]: 2025-11-25 09:58:31.478 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.0 KiB/s wr, 56 op/s
Nov 25 09:58:31 compute-0 kernel: ganesha.nfsd[266181]: segfault at 50 ip 00007fb7a295432e sp 00007fb7767fb210 error 4 in libntirpc.so.5.8[7fb7a2939000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 25 09:58:31 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 09:58:31 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik[262942]: 25/11/2025 09:58:31 : epoch 69257d2d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6ec005770 fd 38 proxy ignored for local
Nov 25 09:58:31 compute-0 systemd[1]: Started Process Core Dump (PID 266444/UID 0).
Nov 25 09:58:31 compute-0 podman[266445]: 2025-11-25 09:58:31.871488241 +0000 UTC m=+0.059673360 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 09:58:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:32.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:32.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:32 compute-0 ceph-mon[74207]: pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.0 KiB/s wr, 56 op/s
Nov 25 09:58:32 compute-0 nova_compute[253512]: 2025-11-25 09:58:32.738 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:32 compute-0 systemd-coredump[266446]: Process 262946 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 63:
                                                    #0  0x00007fb7a295432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 25 09:58:32 compute-0 systemd[1]: systemd-coredump@13-266444-0.service: Deactivated successfully.
Nov 25 09:58:32 compute-0 podman[266474]: 2025-11-25 09:58:32.878128631 +0000 UTC m=+0.017180250 container died 1aa73363a44015985c0c74291440fe0491443ad902da42232bdda08b78cde9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 09:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-66b476ebc4afaffb4a12c64d9aa6daae74df5dd9cebc157d7e529a5588d57c9e-merged.mount: Deactivated successfully.
Nov 25 09:58:32 compute-0 podman[266474]: 2025-11-25 09:58:32.89767484 +0000 UTC m=+0.036726449 container remove 1aa73363a44015985c0c74291440fe0491443ad902da42232bdda08b78cde9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-2-0-compute-0-rychik, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:58:32 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Main process exited, code=exited, status=139/n/a
Nov 25 09:58:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:32 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:58:32 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.114s CPU time.
Nov 25 09:58:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:58:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:34.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:34.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:34 compute-0 ceph-mon[74207]: pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:58:34 compute-0 nova_compute[253512]: 2025-11-25 09:58:34.740 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:34 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095834 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:58:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:58:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:36.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:36.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:36 compute-0 ceph-mon[74207]: pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 09:58:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:37.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:37.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:37.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:37.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 25 09:58:37 compute-0 nova_compute[253512]: 2025-11-25 09:58:37.721 253516 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064702.7194757, e414c01f-d327-411b-9309-c4c4dabd5b4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:58:37 compute-0 nova_compute[253512]: 2025-11-25 09:58:37.721 253516 INFO nova.compute.manager [-] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] VM Stopped (Lifecycle Event)
Nov 25 09:58:37 compute-0 nova_compute[253512]: 2025-11-25 09:58:37.739 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:37 compute-0 nova_compute[253512]: 2025-11-25 09:58:37.748 253516 DEBUG nova.compute.manager [None req-1805e5bf-152d-4550-83dc-9c8daf00e799 - - - - - -] [instance: e414c01f-d327-411b-9309-c4c4dabd5b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:58:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095837 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:58:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:37 compute-0 podman[266514]: 2025-11-25 09:58:37.977439809 +0000 UTC m=+0.041952893 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 09:58:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:38 compute-0 ceph-mon[74207]: pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 25 09:58:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:39 compute-0 nova_compute[253512]: 2025-11-25 09:58:39.744 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:40] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Nov 25 09:58:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:40] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Nov 25 09:58:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:40 compute-0 ceph-mon[74207]: pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:41 compute-0 nova_compute[253512]: 2025-11-25 09:58:41.660 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.685575) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721685613, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 694, "num_deletes": 251, "total_data_size": 1026922, "memory_usage": 1040856, "flush_reason": "Manual Compaction"}
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721689468, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1017328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24063, "largest_seqno": 24756, "table_properties": {"data_size": 1013658, "index_size": 1514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8548, "raw_average_key_size": 19, "raw_value_size": 1006258, "raw_average_value_size": 2323, "num_data_blocks": 66, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064673, "oldest_key_time": 1764064673, "file_creation_time": 1764064721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3965 microseconds, and 2917 cpu microseconds.
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689546) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1017328 bytes OK
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689559) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690054) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690065) EVENT_LOG_v1 {"time_micros": 1764064721690061, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690074) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1023337, prev total WAL file size 1023337, number of live WAL files 2.
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(993KB)], [53(12MB)]
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721690950, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14206375, "oldest_snapshot_seqno": -1}
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5398 keys, 12081233 bytes, temperature: kUnknown
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721723147, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12081233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12046205, "index_size": 20454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138447, "raw_average_key_size": 25, "raw_value_size": 11949655, "raw_average_value_size": 2213, "num_data_blocks": 829, "num_entries": 5398, "num_filter_entries": 5398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764064721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.723288) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12081233 bytes
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.723608) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 440.9 rd, 375.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.6 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(25.8) write-amplify(11.9) OK, records in: 5916, records dropped: 518 output_compression: NoCompression
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.723630) EVENT_LOG_v1 {"time_micros": 1764064721723616, "job": 28, "event": "compaction_finished", "compaction_time_micros": 32218, "compaction_time_cpu_micros": 28413, "output_level": 6, "num_output_files": 1, "total_output_size": 12081233, "num_input_records": 5916, "num_output_records": 5398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721723816, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721725715, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.690692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.725731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.725734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.725735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.725736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:41 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-09:58:41.725737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 09:58:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:42.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:42.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:42 compute-0 sudo[266535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:58:42 compute-0 sudo[266535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:58:42 compute-0 sudo[266535]: pam_unix(sudo:session): session closed for user root
Nov 25 09:58:42 compute-0 ceph-mon[74207]: pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:42 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/110315522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:42 compute-0 nova_compute[253512]: 2025-11-25 09:58:42.741 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:43 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Scheduled restart job, restart counter is at 14.
Nov 25 09:58:43 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:58:43 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Consumed 1.114s CPU time.
Nov 25 09:58:43 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Start request repeated too quickly.
Nov 25 09:58:43 compute-0 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.2.0.compute-0.rychik.service: Failed with result 'exit-code'.
Nov 25 09:58:43 compute-0 systemd[1]: Failed to start Ceph nfs.cephfs.2.0.compute-0.rychik for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 09:58:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:44.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:44.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:44 compute-0 ceph-mon[74207]: pgmap v825: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:44 compute-0 nova_compute[253512]: 2025-11-25 09:58:44.746 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:58:44
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.nfs', 'default.rgw.control', 'vms', 'default.rgw.meta', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data']
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:58:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:58:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:58:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:58:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:58:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:46.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:46 compute-0 ceph-mon[74207]: pgmap v826: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 25 09:58:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:47.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:47.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:47.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:47.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:58:47 compute-0 nova_compute[253512]: 2025-11-25 09:58:47.741 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:48.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:48 compute-0 ceph-mon[74207]: pgmap v827: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:58:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:58:49 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4089513043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:58:49 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/998264127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:58:49 compute-0 nova_compute[253512]: 2025-11-25 09:58:49.747 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:50.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:50] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
Nov 25 09:58:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:58:50] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
Nov 25 09:58:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:50.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:50 compute-0 ceph-mon[74207]: pgmap v828: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:58:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:58:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:52.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:52.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:52 compute-0 ceph-mon[74207]: pgmap v829: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:58:52 compute-0 nova_compute[253512]: 2025-11-25 09:58:52.743 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:58:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:54.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:54.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:54 compute-0 ceph-mon[74207]: pgmap v830: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:58:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/719374702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:58:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/719374702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:58:54 compute-0 nova_compute[253512]: 2025-11-25 09:58:54.749 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:54 compute-0 podman[266574]: 2025-11-25 09:58:54.969470835 +0000 UTC m=+0.035011637 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:58:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:58:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:56.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:58:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:58:56 compute-0 ceph-mon[74207]: pgmap v831: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 25 09:58:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:57.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:57.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:57.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:58:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:58:57.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
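
The three webhook failures above share one root cause: the resolver at 192.168.122.80 has no records for the np000553469x.shiftstack dashboard hosts. A quick, hypothetical way to confirm the lookups fail the same way outside Alertmanager, using only the Python stdlib:

    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, "resolves")
        except socket.gaierror as exc:
            print(host, "->", exc)  # expected: name resolution failure
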
Nov 25 09:58:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 09:58:57 compute-0 nova_compute[253512]: 2025-11-25 09:58:57.743 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:57 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1528136379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:58:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:58:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:58.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:58:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:58:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:58.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:58:58 compute-0 ceph-mon[74207]: pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 09:58:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 09:58:59 compute-0 nova_compute[253512]: 2025-11-25 09:58:59.750 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:58:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:58:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:00.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:00] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
Nov 25 09:59:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:00] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
Nov 25 09:59:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:00.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:00 compute-0 ceph-mon[74207]: pgmap v833: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 09:59:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 09:59:02 compute-0 podman[266598]: 2025-11-25 09:59:02.009430416 +0000 UTC m=+0.064249688 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller)
Nov 25 09:59:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:02.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.233 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "f5a6cffa-7adc-4794-942d-377379b2d807" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.233 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.245 253516 DEBUG nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.311 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.311 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.352 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.353 253516 INFO nova.compute.claims [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.434 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:02.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:02 compute-0 sudo[266641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:59:02 compute-0 sudo[266641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:02 compute-0 sudo[266641]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.745 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:59:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236858694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.772 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
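
The ceph df round-trip logged above (Running cmd at 09:59:02.434, returned 0 in 0.338s here) is oslo.concurrency's processutils shelling out to the ceph CLI and timing it. A minimal sketch of the same call pattern, assuming the client.openstack keyring is readable; simplified from nova's real call site, likely in nova/storage/rbd_utils.py:

    import json
    from oslo_concurrency import processutils

    # Run the exact command from the log and parse its JSON output.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
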
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.775 253516 DEBUG nova.compute.provider_tree [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.786 253516 DEBUG nova.scheduler.client.report [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:59:02 compute-0 ceph-mon[74207]: pgmap v834: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 09:59:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/236858694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.804 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.805 253516 DEBUG nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.854 253516 DEBUG nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.854 253516 DEBUG nova.network.neutron [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.885 253516 INFO nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.897 253516 DEBUG nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.970 253516 DEBUG nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.971 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.971 253516 INFO nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Creating image(s)
Nov 25 09:59:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:02 compute-0 nova_compute[253512]: 2025-11-25 09:59:02.989 253516 DEBUG nova.storage.rbd_utils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f5a6cffa-7adc-4794-942d-377379b2d807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.006 253516 DEBUG nova.storage.rbd_utils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f5a6cffa-7adc-4794-942d-377379b2d807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.023 253516 DEBUG nova.storage.rbd_utils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f5a6cffa-7adc-4794-942d-377379b2d807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.025 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.071 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
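
Note the wrapper in the command above: the qemu-img probe runs under oslo_concurrency.prlimit with a 1 GiB address-space cap and a 30 s CPU cap, so a malformed base image cannot wedge the compute agent. A hedged sketch of issuing the same probe through processutils, with the limits copied from the logged command line:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9',
        '--force-share', '--output=json',
        prlimit=limits)
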
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.071 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.072 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.072 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.087 253516 DEBUG nova.storage.rbd_utils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f5a6cffa-7adc-4794-942d-377379b2d807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.089 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 f5a6cffa-7adc-4794-942d-377379b2d807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.218 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 f5a6cffa-7adc-4794-942d-377379b2d807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.260 253516 DEBUG nova.storage.rbd_utils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image f5a6cffa-7adc-4794-942d-377379b2d807_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
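
The disk is first pushed into the 'vms' pool with the rbd CLI (the import command above), then grown to the flavor's 1 GiB root disk through the python rbd bindings. A minimal sketch of the resize step, assuming the client.openstack keyring is readable; error handling omitted:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'f5a6cffa-7adc-4794-942d-377379b2d807_disk') as image:
                image.resize(1073741824)  # 1 GiB, matching the flavor's root_gb=1
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
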
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.313 253516 DEBUG nova.objects.instance [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid f5a6cffa-7adc-4794-942d-377379b2d807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.330 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.331 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Ensure instance console log exists: /var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.331 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.331 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.331 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
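
The acquire/release pairs in the lockutils lines throughout this trace come from oslo.concurrency's named-lock decorator; the "held 0.000s" entries just above show the vGPU path taking and dropping "vgpu_resources" with nothing to do. A minimal sketch of the pattern (the function body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def _allocate_mdevs():
        # Runs with the named lock held; lockutils itself emits the
        # "Acquiring"/"acquired"/"released" DEBUG lines seen in the log.
        pass
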
Nov 25 09:59:03 compute-0 nova_compute[253512]: 2025-11-25 09:59:03.448 253516 DEBUG nova.policy [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:59:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 80 op/s
Nov 25 09:59:03 compute-0 ovn_controller[155020]: 2025-11-25T09:59:03Z|00078|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 09:59:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:04.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:04 compute-0 nova_compute[253512]: 2025-11-25 09:59:04.603 253516 DEBUG nova.network.neutron [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Successfully updated port: 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:59:04 compute-0 nova_compute[253512]: 2025-11-25 09:59:04.621 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-f5a6cffa-7adc-4794-942d-377379b2d807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:59:04 compute-0 nova_compute[253512]: 2025-11-25 09:59:04.621 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-f5a6cffa-7adc-4794-942d-377379b2d807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:59:04 compute-0 nova_compute[253512]: 2025-11-25 09:59:04.621 253516 DEBUG nova.network.neutron [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:59:04 compute-0 nova_compute[253512]: 2025-11-25 09:59:04.699 253516 DEBUG nova.compute.manager [req-a78553e3-20b0-48d7-b277-09ce09f43744 req-89c846c2-cf80-485d-81e6-d8c0b61d7d1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Received event network-changed-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:59:04 compute-0 nova_compute[253512]: 2025-11-25 09:59:04.699 253516 DEBUG nova.compute.manager [req-a78553e3-20b0-48d7-b277-09ce09f43744 req-89c846c2-cf80-485d-81e6-d8c0b61d7d1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Refreshing instance network info cache due to event network-changed-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:59:04 compute-0 nova_compute[253512]: 2025-11-25 09:59:04.699 253516 DEBUG oslo_concurrency.lockutils [req-a78553e3-20b0-48d7-b277-09ce09f43744 req-89c846c2-cf80-485d-81e6-d8c0b61d7d1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-f5a6cffa-7adc-4794-942d-377379b2d807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:59:04 compute-0 nova_compute[253512]: 2025-11-25 09:59:04.752 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:04 compute-0 nova_compute[253512]: 2025-11-25 09:59:04.765 253516 DEBUG nova.network.neutron [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:59:04 compute-0 ceph-mon[74207]: pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 80 op/s
Nov 25 09:59:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:05.387 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:05.387 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:05.387 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 80 op/s
Nov 25 09:59:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:06.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.360 253516 DEBUG nova.network.neutron [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Updating instance_info_cache with network_info: [{"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.380 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-f5a6cffa-7adc-4794-942d-377379b2d807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.380 253516 DEBUG nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Instance network_info: |[{"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.380 253516 DEBUG oslo_concurrency.lockutils [req-a78553e3-20b0-48d7-b277-09ce09f43744 req-89c846c2-cf80-485d-81e6-d8c0b61d7d1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-f5a6cffa-7adc-4794-942d-377379b2d807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.380 253516 DEBUG nova.network.neutron [req-a78553e3-20b0-48d7-b277-09ce09f43744 req-89c846c2-cf80-485d-81e6-d8c0b61d7d1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Refreshing network info cache for port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.382 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Start _get_guest_xml network_info=[{"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.385 253516 WARNING nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.389 253516 DEBUG nova.virt.libvirt.host [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.389 253516 DEBUG nova.virt.libvirt.host [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.393 253516 DEBUG nova.virt.libvirt.host [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.393 253516 DEBUG nova.virt.libvirt.host [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.393 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.393 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.394 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.394 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.394 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.394 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.395 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.395 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.395 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.395 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.395 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.396 253516 DEBUG nova.virt.hardware [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
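
The topology lines above walk nova.virt.hardware's search: with no flavor or image constraints (limits 0:0:0, maxima 65536 per axis), a 1-vCPU guest admits exactly one (sockets, cores, threads) tuple. A rough sketch mirroring, not reproducing, that enumeration:

    def possible_topologies(vcpus, max_each=65536):
        # Yield every (sockets, cores, threads) whose product is the vCPU count.
        for sockets in range(1, min(vcpus, max_each) + 1):
            for cores in range(1, min(vcpus, max_each) + 1):
                for threads in range(1, min(vcpus, max_each) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    list(possible_topologies(1))  # [(1, 1, 1)], matching the log
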
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.397 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:59:06 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197358068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.748 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.765 253516 DEBUG nova.storage.rbd_utils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f5a6cffa-7adc-4794-942d-377379b2d807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:06 compute-0 nova_compute[253512]: 2025-11-25 09:59:06.768 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:06 compute-0 ceph-mon[74207]: pgmap v836: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 80 op/s
Nov 25 09:59:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3197358068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:07.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:07.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:07.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:07.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:59:07 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630008766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.109 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.110 253516 DEBUG nova.virt.libvirt.vif [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-810107946',display_name='tempest-TestNetworkBasicOps-server-810107946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-810107946',id=9,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMzCz8+iifzW0jcx1FkJxAPdGR3sUwRwqBaojqng97U6/yBZZtSZBMEQxUE8DySlx6rXxAfZvUh7cmKV/eDVssoF4inwGbT9uoQKqal5q5Gm+AH+DrYYxr58jHy2TCW8/g==',key_name='tempest-TestNetworkBasicOps-506834367',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-9d23erbi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:59:02Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=f5a6cffa-7adc-4794-942d-377379b2d807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.110 253516 DEBUG nova.network.os_vif_util [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.111 253516 DEBUG nova.network.os_vif_util [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.112 253516 DEBUG nova.objects.instance [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid f5a6cffa-7adc-4794-942d-377379b2d807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.142 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <uuid>f5a6cffa-7adc-4794-942d-377379b2d807</uuid>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <name>instance-00000009</name>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <memory>131072</memory>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <vcpu>1</vcpu>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <metadata>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <nova:name>tempest-TestNetworkBasicOps-server-810107946</nova:name>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <nova:creationTime>2025-11-25 09:59:06</nova:creationTime>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <nova:flavor name="m1.nano">
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <nova:memory>128</nova:memory>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <nova:disk>1</nova:disk>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <nova:swap>0</nova:swap>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       </nova:flavor>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <nova:owner>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       </nova:owner>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <nova:ports>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <nova:port uuid="9c65e9ae-66c9-44ad-8fb1-f07f28d9b619">
Nov 25 09:59:07 compute-0 nova_compute[253512]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         </nova:port>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       </nova:ports>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     </nova:instance>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   </metadata>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <sysinfo type="smbios">
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <system>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <entry name="serial">f5a6cffa-7adc-4794-942d-377379b2d807</entry>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <entry name="uuid">f5a6cffa-7adc-4794-942d-377379b2d807</entry>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     </system>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   </sysinfo>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <os>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <boot dev="hd"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <smbios mode="sysinfo"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   </os>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <features>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <acpi/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <apic/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <vmcoreinfo/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   </features>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <clock offset="utc">
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <timer name="hpet" present="no"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   </clock>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <cpu mode="host-model" match="exact">
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <disk type="network" device="disk">
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/f5a6cffa-7adc-4794-942d-377379b2d807_disk">
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       </source>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <target dev="vda" bus="virtio"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <disk type="network" device="cdrom">
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/f5a6cffa-7adc-4794-942d-377379b2d807_disk.config">
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       </source>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:59:07 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <target dev="sda" bus="sata"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <interface type="ethernet">
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <mac address="fa:16:3e:dd:53:a1"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <mtu size="1442"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <target dev="tap9c65e9ae-66"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <serial type="pty">
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <log file="/var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807/console.log" append="off"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     </serial>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <video>
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     </video>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <input type="tablet" bus="usb"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <rng model="virtio">
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <controller type="usb" index="0"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     <memballoon model="virtio">
Nov 25 09:59:07 compute-0 nova_compute[253512]:       <stats period="10"/>
Nov 25 09:59:07 compute-0 nova_compute[253512]:     </memballoon>
Nov 25 09:59:07 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:59:07 compute-0 nova_compute[253512]: </domain>
Nov 25 09:59:07 compute-0 nova_compute[253512]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.143 253516 DEBUG nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Preparing to wait for external event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.143 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.144 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.144 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.144 253516 DEBUG nova.virt.libvirt.vif [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-810107946',display_name='tempest-TestNetworkBasicOps-server-810107946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-810107946',id=9,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMzCz8+iifzW0jcx1FkJxAPdGR3sUwRwqBaojqng97U6/yBZZtSZBMEQxUE8DySlx6rXxAfZvUh7cmKV/eDVssoF4inwGbT9uoQKqal5q5Gm+AH+DrYYxr58jHy2TCW8/g==',key_name='tempest-TestNetworkBasicOps-506834367',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-9d23erbi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:59:02Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=f5a6cffa-7adc-4794-942d-377379b2d807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.144 253516 DEBUG nova.network.os_vif_util [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.145 253516 DEBUG nova.network.os_vif_util [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.145 253516 DEBUG os_vif [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.146 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.146 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.146 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.148 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.148 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c65e9ae-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.149 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c65e9ae-66, col_values=(('external_ids', {'iface-id': '9c65e9ae-66c9-44ad-8fb1-f07f28d9b619', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:53:a1', 'vm-uuid': 'f5a6cffa-7adc-4794-942d-377379b2d807'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:07 compute-0 NetworkManager[48903]: <info>  [1764064747.1507] manager: (tap9c65e9ae-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.154 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.156 253516 INFO os_vif [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66')
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.188 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.188 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.188 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:dd:53:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.189 253516 INFO nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Using config drive
Nov 25 09:59:07 compute-0 nova_compute[253512]: 2025-11-25 09:59:07.205 253516 DEBUG nova.storage.rbd_utils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f5a6cffa-7adc-4794-942d-377379b2d807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 25 09:59:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [WARNING] 328/095907 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 09:59:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-0-lycwwd[102341]: [ALERT] 328/095907 (4) : backend 'backend' has no server available!
Nov 25 09:59:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1630008766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:08.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:08.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.518 253516 INFO nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Creating config drive at /var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807/disk.config
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.523 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6eoapjg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.549 253516 DEBUG nova.network.neutron [req-a78553e3-20b0-48d7-b277-09ce09f43744 req-89c846c2-cf80-485d-81e6-d8c0b61d7d1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Updated VIF entry in instance network info cache for port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.549 253516 DEBUG nova.network.neutron [req-a78553e3-20b0-48d7-b277-09ce09f43744 req-89c846c2-cf80-485d-81e6-d8c0b61d7d1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Updating instance_info_cache with network_info: [{"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.563 253516 DEBUG oslo_concurrency.lockutils [req-a78553e3-20b0-48d7-b277-09ce09f43744 req-89c846c2-cf80-485d-81e6-d8c0b61d7d1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-f5a6cffa-7adc-4794-942d-377379b2d807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.639 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6eoapjg" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.657 253516 DEBUG nova.storage.rbd_utils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f5a6cffa-7adc-4794-942d-377379b2d807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.658 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807/disk.config f5a6cffa-7adc-4794-942d-377379b2d807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.736 253516 DEBUG oslo_concurrency.processutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807/disk.config f5a6cffa-7adc-4794-942d-377379b2d807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.737 253516 INFO nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Deleting local config drive /var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807/disk.config because it was imported into RBD.
Nov 25 09:59:08 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 09:59:08 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 09:59:08 compute-0 kernel: tap9c65e9ae-66: entered promiscuous mode
Nov 25 09:59:08 compute-0 ovn_controller[155020]: 2025-11-25T09:59:08Z|00079|binding|INFO|Claiming lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 for this chassis.
Nov 25 09:59:08 compute-0 ovn_controller[155020]: 2025-11-25T09:59:08Z|00080|binding|INFO|9c65e9ae-66c9-44ad-8fb1-f07f28d9b619: Claiming fa:16:3e:dd:53:a1 10.100.0.9
Nov 25 09:59:08 compute-0 NetworkManager[48903]: <info>  [1764064748.8030] manager: (tap9c65e9ae-66): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.802 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.805 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:08 compute-0 NetworkManager[48903]: <info>  [1764064748.8110] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 25 09:59:08 compute-0 NetworkManager[48903]: <info>  [1764064748.8114] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.809 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:08 compute-0 podman[266962]: 2025-11-25 09:59:08.811084681 +0000 UTC m=+0.047931052 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.813 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:53:a1 10.100.0.9'], port_security=['fa:16:3e:dd:53:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5a6cffa-7adc-4794-942d-377379b2d807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '77421187-f24b-4366-8c59-8fbcf4a8390c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4110b518-ed62-4127-a552-a8ff9779dc23, chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.814 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 in datapath b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f bound to our chassis
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.815 164791 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f
Nov 25 09:59:08 compute-0 ceph-mon[74207]: pgmap v837: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.822 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[af40281e-2b7a-4a0f-81cb-c67789fafb5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.823 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1cfdfd5-81 in ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.825 258952 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1cfdfd5-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.825 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d70bdf-ba6e-4e96-a081-28303ca4e9ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.826 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[4af65dae-14d6-42e7-81e7-aa4ada0642e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.834 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[1acc8554-2a04-414e-9430-440e42fc648b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 systemd-machined[216497]: New machine qemu-4-instance-00000009.
Nov 25 09:59:08 compute-0 systemd-udevd[267014]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:59:08 compute-0 NetworkManager[48903]: <info>  [1764064748.8516] device (tap9c65e9ae-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:59:08 compute-0 NetworkManager[48903]: <info>  [1764064748.8524] device (tap9c65e9ae-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:59:08 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000009.
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.864 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[1848f8d6-00c8-4f99-99b1-d0d36f2d8fb8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.884 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[48bbd998-5bee-44ab-8906-310bdefcbcc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.889 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[a1963d64-a4c4-4493-8e62-feb5e2d7d89f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 systemd-udevd[267016]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:59:08 compute-0 NetworkManager[48903]: <info>  [1764064748.8906] manager: (tapb1cfdfd5-80): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.913 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[dce0daf5-a4c1-444c-8b13-122d4fbebffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.915 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdc6e27-f30d-4203-83d7-3df1f56b551b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.927 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.929 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.936 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:08 compute-0 NetworkManager[48903]: <info>  [1764064748.9401] device (tapb1cfdfd5-80): carrier: link connected
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.941 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[956aef61-b7b6-4d33-9223-9978afbf055a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_controller[155020]: 2025-11-25T09:59:08Z|00081|binding|INFO|Setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 ovn-installed in OVS
Nov 25 09:59:08 compute-0 ovn_controller[155020]: 2025-11-25T09:59:08Z|00082|binding|INFO|Setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 up in Southbound
Nov 25 09:59:08 compute-0 nova_compute[253512]: 2025-11-25 09:59:08.951 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.952 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1499c1-749a-40fe-960c-92c31ba32e35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1cfdfd5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:b0:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 352951, 'reachable_time': 44123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267037, 'error': None, 'target': 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.961 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[22cee2ba-dfc8-4c8b-8160-5dfeb8e80030]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:b0c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 352951, 'tstamp': 352951}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267038, 'error': None, 'target': 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.971 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[0558da79-3e62-48f8-b1c0-64626acf2d44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1cfdfd5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:b0:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 352951, 'reachable_time': 44123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267039, 'error': None, 'target': 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:08 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:08.987 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[382113b3-b786-4db5-ac1b-d9755be4633f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:09.018 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d3d0ae-ccce-4dc5-9a05-a4288d123891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:09 compute-0 kernel: tapb1cfdfd5-80: entered promiscuous mode
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:09.019 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1cfdfd5-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:09.019 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:09.019 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1cfdfd5-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.020 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.022 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:09 compute-0 NetworkManager[48903]: <info>  [1764064749.0228] manager: (tapb1cfdfd5-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:09.025 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1cfdfd5-80, col_values=(('external_ids', {'iface-id': '296dedf0-24b8-4ce5-952e-492b27ffb1cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.025 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:09 compute-0 ovn_controller[155020]: 2025-11-25T09:59:09Z|00083|binding|INFO|Releasing lport 296dedf0-24b8-4ce5-952e-492b27ffb1cd from this chassis (sb_readonly=0)
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.026 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:09.027 164791 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:09.028 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[828d4180-a23d-48cd-b95c-5a5788dd2586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:09.028 164791 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: global
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     log         /dev/log local0 debug
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     log-tag     haproxy-metadata-proxy-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     user        root
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     group       root
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     maxconn     1024
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     pidfile     /var/lib/neutron/external/pids/b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f.pid.haproxy
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     daemon
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: defaults
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     log global
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     mode http
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     option httplog
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     option dontlognull
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     option http-server-close
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     option forwardfor
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     retries                 3
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     timeout http-request    30s
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     timeout connect         30s
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     timeout client          32s
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     timeout server          32s
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     timeout http-keep-alive 30s
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: listen listener
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     bind 169.254.169.254:80
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:     http-request add-header X-OVN-Network-ID b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:59:09 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:09.029 164791 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'env', 'PROCESS_TAG=haproxy-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.040 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.158 253516 DEBUG nova.compute.manager [req-156c22a9-897b-4c31-ab4c-7dd27938b852 req-4bede77a-6316-47ff-adcf-93ef0f958fc9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Received event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.158 253516 DEBUG oslo_concurrency.lockutils [req-156c22a9-897b-4c31-ab4c-7dd27938b852 req-4bede77a-6316-47ff-adcf-93ef0f958fc9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.158 253516 DEBUG oslo_concurrency.lockutils [req-156c22a9-897b-4c31-ab4c-7dd27938b852 req-4bede77a-6316-47ff-adcf-93ef0f958fc9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.159 253516 DEBUG oslo_concurrency.lockutils [req-156c22a9-897b-4c31-ab4c-7dd27938b852 req-4bede77a-6316-47ff-adcf-93ef0f958fc9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.159 253516 DEBUG nova.compute.manager [req-156c22a9-897b-4c31-ab4c-7dd27938b852 req-4bede77a-6316-47ff-adcf-93ef0f958fc9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Processing event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:59:09 compute-0 podman[267069]: 2025-11-25 09:59:09.3036258 +0000 UTC m=+0.030798695 container create 6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 09:59:09 compute-0 systemd[1]: Started libpod-conmon-6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a.scope.
Nov 25 09:59:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758d35ef62daf0f9ca3822869d62d53a72c148a1f894c1cc73e40ecd779c9cc6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:09 compute-0 podman[267069]: 2025-11-25 09:59:09.367687914 +0000 UTC m=+0.094860827 container init 6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 09:59:09 compute-0 podman[267069]: 2025-11-25 09:59:09.37734543 +0000 UTC m=+0.104518324 container start 6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 09:59:09 compute-0 podman[267069]: 2025-11-25 09:59:09.290074904 +0000 UTC m=+0.017247818 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:59:09 compute-0 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[267082]: [NOTICE]   (267086) : New worker (267088) forked
Nov 25 09:59:09 compute-0 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[267082]: [NOTICE]   (267086) : Loading success.
Nov 25 09:59:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:59:09 compute-0 nova_compute[253512]: 2025-11-25 09:59:09.754 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.130 253516 DEBUG nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.131 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064750.130594, f5a6cffa-7adc-4794-942d-377379b2d807 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.131 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] VM Started (Lifecycle Event)
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.133 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.135 253516 INFO nova.virt.libvirt.driver [-] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Instance spawned successfully.
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.135 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.150 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.152 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.153 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.153 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.154 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.154 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.154 253516 DEBUG nova.virt.libvirt.driver [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.158 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.175 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.175 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064750.1306596, f5a6cffa-7adc-4794-942d-377379b2d807 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.176 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] VM Paused (Lifecycle Event)
Nov 25 09:59:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:10.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.203 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.205 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064750.1329763, f5a6cffa-7adc-4794-942d-377379b2d807 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.205 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] VM Resumed (Lifecycle Event)
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.218 253516 INFO nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Took 7.25 seconds to spawn the instance on the hypervisor.
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.218 253516 DEBUG nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.219 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.228 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:59:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:10] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
Nov 25 09:59:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:10] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.248 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.266 253516 INFO nova.compute.manager [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Took 7.98 seconds to build instance.
Nov 25 09:59:10 compute-0 nova_compute[253512]: 2025-11-25 09:59:10.281 253516 DEBUG oslo_concurrency.lockutils [None req-003eeb46-4d77-4cac-9306-ca36429704a0 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:10.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:10 compute-0 ceph-mon[74207]: pgmap v838: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:59:11 compute-0 nova_compute[253512]: 2025-11-25 09:59:11.220 253516 DEBUG nova.compute.manager [req-9326b8bb-2277-4ab8-bd20-d32b69b28da7 req-17fba0ad-3a5f-4d58-9982-348cd93c6781 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Received event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:59:11 compute-0 nova_compute[253512]: 2025-11-25 09:59:11.221 253516 DEBUG oslo_concurrency.lockutils [req-9326b8bb-2277-4ab8-bd20-d32b69b28da7 req-17fba0ad-3a5f-4d58-9982-348cd93c6781 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:11 compute-0 nova_compute[253512]: 2025-11-25 09:59:11.221 253516 DEBUG oslo_concurrency.lockutils [req-9326b8bb-2277-4ab8-bd20-d32b69b28da7 req-17fba0ad-3a5f-4d58-9982-348cd93c6781 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:11 compute-0 nova_compute[253512]: 2025-11-25 09:59:11.221 253516 DEBUG oslo_concurrency.lockutils [req-9326b8bb-2277-4ab8-bd20-d32b69b28da7 req-17fba0ad-3a5f-4d58-9982-348cd93c6781 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:11 compute-0 nova_compute[253512]: 2025-11-25 09:59:11.222 253516 DEBUG nova.compute.manager [req-9326b8bb-2277-4ab8-bd20-d32b69b28da7 req-17fba0ad-3a5f-4d58-9982-348cd93c6781 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] No waiting events found dispatching network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:59:11 compute-0 nova_compute[253512]: 2025-11-25 09:59:11.222 253516 WARNING nova.compute.manager [req-9326b8bb-2277-4ab8-bd20-d32b69b28da7 req-17fba0ad-3a5f-4d58-9982-348cd93c6781 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Received unexpected event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 for instance with vm_state active and task_state None.
Nov 25 09:59:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Nov 25 09:59:12 compute-0 nova_compute[253512]: 2025-11-25 09:59:12.151 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:12.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:12.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:12 compute-0 ceph-mon[74207]: pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Nov 25 09:59:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.622 253516 DEBUG oslo_concurrency.lockutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "f5a6cffa-7adc-4794-942d-377379b2d807" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.623 253516 DEBUG oslo_concurrency.lockutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.623 253516 DEBUG oslo_concurrency.lockutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.624 253516 DEBUG oslo_concurrency.lockutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.624 253516 DEBUG oslo_concurrency.lockutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.625 253516 INFO nova.compute.manager [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Terminating instance
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.625 253516 DEBUG nova.compute.manager [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 09:59:13 compute-0 kernel: tap9c65e9ae-66 (unregistering): left promiscuous mode
Nov 25 09:59:13 compute-0 NetworkManager[48903]: <info>  [1764064753.6495] device (tap9c65e9ae-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00084|binding|INFO|Releasing lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 from this chassis (sb_readonly=0)
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00085|binding|INFO|Setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 down in Southbound
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00086|binding|INFO|Removing iface tap9c65e9ae-66 ovn-installed in OVS
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.655 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.656 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.660 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:53:a1 10.100.0.9'], port_security=['fa:16:3e:dd:53:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5a6cffa-7adc-4794-942d-377379b2d807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '77421187-f24b-4366-8c59-8fbcf4a8390c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4110b518-ed62-4127-a552-a8ff9779dc23, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.661 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 in datapath b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f unbound from our chassis
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.661 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.662 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[402f7903-fa6d-44c1-b66a-a67e1b5daff3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.663 164791 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f namespace which is not needed anymore
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.676 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:13 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 09:59:13 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 4.831s CPU time.
Nov 25 09:59:13 compute-0 systemd-machined[216497]: Machine qemu-4-instance-00000009 terminated.
Nov 25 09:59:13 compute-0 sudo[267161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:59:13 compute-0 sudo[267161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:13 compute-0 sudo[267161]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:13 compute-0 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[267082]: [NOTICE]   (267086) : haproxy version is 2.8.14-c23fe91
Nov 25 09:59:13 compute-0 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[267082]: [NOTICE]   (267086) : path to executable is /usr/sbin/haproxy
Nov 25 09:59:13 compute-0 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[267082]: [WARNING]  (267086) : Exiting Master process...
Nov 25 09:59:13 compute-0 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[267082]: [WARNING]  (267086) : Exiting Master process...
Nov 25 09:59:13 compute-0 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[267082]: [ALERT]    (267086) : Current worker (267088) exited with code 143 (Terminated)
Nov 25 09:59:13 compute-0 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[267082]: [WARNING]  (267086) : All workers exited. Exiting... (0)
Nov 25 09:59:13 compute-0 systemd[1]: libpod-6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a.scope: Deactivated successfully.
Nov 25 09:59:13 compute-0 conmon[267082]: conmon 6f1bc2941193013df295 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a.scope/container/memory.events
Nov 25 09:59:13 compute-0 podman[267167]: 2025-11-25 09:59:13.780482179 +0000 UTC m=+0.048500506 container died 6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 09:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a-userdata-shm.mount: Deactivated successfully.
Nov 25 09:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-758d35ef62daf0f9ca3822869d62d53a72c148a1f894c1cc73e40ecd779c9cc6-merged.mount: Deactivated successfully.
Nov 25 09:59:13 compute-0 podman[267167]: 2025-11-25 09:59:13.803616623 +0000 UTC m=+0.071634950 container cleanup 6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 09:59:13 compute-0 sudo[267199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 09:59:13 compute-0 sudo[267199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:13 compute-0 systemd[1]: libpod-conmon-6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a.scope: Deactivated successfully.
Nov 25 09:59:13 compute-0 kernel: tap9c65e9ae-66: entered promiscuous mode
Nov 25 09:59:13 compute-0 NetworkManager[48903]: <info>  [1764064753.8378] manager: (tap9c65e9ae-66): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.839 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:13 compute-0 kernel: tap9c65e9ae-66 (unregistering): left promiscuous mode
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00087|binding|INFO|Claiming lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 for this chassis.
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00088|binding|INFO|9c65e9ae-66c9-44ad-8fb1-f07f28d9b619: Claiming fa:16:3e:dd:53:a1 10.100.0.9
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.851 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:53:a1 10.100.0.9'], port_security=['fa:16:3e:dd:53:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5a6cffa-7adc-4794-942d-377379b2d807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '77421187-f24b-4366-8c59-8fbcf4a8390c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4110b518-ed62-4127-a552-a8ff9779dc23, chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.863 253516 INFO nova.virt.libvirt.driver [-] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Instance destroyed successfully.
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.863 253516 DEBUG nova.objects.instance [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid f5a6cffa-7adc-4794-942d-377379b2d807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00089|binding|INFO|Setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 ovn-installed in OVS
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00090|binding|INFO|Setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 up in Southbound
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00091|binding|INFO|Releasing lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 from this chassis (sb_readonly=1)
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00092|if_status|INFO|Not setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 down as sb is readonly
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00093|binding|INFO|Releasing lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 from this chassis (sb_readonly=0)
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00094|binding|INFO|Setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 down in Southbound
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.872 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:13 compute-0 ovn_controller[155020]: 2025-11-25T09:59:13Z|00095|binding|INFO|Removing iface tap9c65e9ae-66 ovn-installed in OVS
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.875 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:53:a1 10.100.0.9'], port_security=['fa:16:3e:dd:53:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5a6cffa-7adc-4794-942d-377379b2d807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '77421187-f24b-4366-8c59-8fbcf4a8390c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4110b518-ed62-4127-a552-a8ff9779dc23, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.875 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.879 253516 DEBUG nova.virt.libvirt.vif [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-810107946',display_name='tempest-TestNetworkBasicOps-server-810107946',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-810107946',id=9,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMzCz8+iifzW0jcx1FkJxAPdGR3sUwRwqBaojqng97U6/yBZZtSZBMEQxUE8DySlx6rXxAfZvUh7cmKV/eDVssoF4inwGbT9uoQKqal5q5Gm+AH+DrYYxr58jHy2TCW8/g==',key_name='tempest-TestNetworkBasicOps-506834367',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:59:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-9d23erbi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:59:10Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=f5a6cffa-7adc-4794-942d-377379b2d807,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.879 253516 DEBUG nova.network.os_vif_util [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.880 253516 DEBUG nova.network.os_vif_util [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.881 253516 DEBUG os_vif [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.882 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.882 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c65e9ae-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.883 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.885 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:59:13 compute-0 podman[267235]: 2025-11-25 09:59:13.88459096 +0000 UTC m=+0.064541687 container remove 6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.889 253516 INFO os_vif [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66')
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.890 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6fc6c0-aae5-437a-b3c7-ba1118bffba6]: (4, ('Tue Nov 25 09:59:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f (6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a)\n6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a\nTue Nov 25 09:59:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f (6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a)\n6f1bc2941193013df2956ec21c0c94813666a7bdd2f1bd06265f794ac98f3e4a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.892 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[97b1c785-ff3b-4003-88a1-99ba0920ca9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:13 compute-0 kernel: tapb1cfdfd5-80: left promiscuous mode
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.893 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1cfdfd5-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.899 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a2b18b-351a-4033-96aa-465a53e51aa1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.914 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc4c8dc-1adc-47e1-994e-656e29cdd290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.915 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[575c708f-a127-48aa-83f6-1f659530570d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:13 compute-0 nova_compute[253512]: 2025-11-25 09:59:13.923 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.928 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd4f440-5b4a-4bb6-89fb-779dc24fface]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 352945, 'reachable_time': 41693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267274, 'error': None, 'target': 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:13 compute-0 systemd[1]: run-netns-ovnmeta\x2db1cfdfd5\x2d8c3e\x2d495c\x2da4e8\x2d9aea8f1d5f9f.mount: Deactivated successfully.
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.934 164901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.934 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a5f258-48dd-4bc1-ba05-34268ac13aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.935 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 in datapath b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f unbound from our chassis
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.935 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.936 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[78d69870-76dc-4fc4-b1bb-dc6a6ff34c8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.937 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 in datapath b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f unbound from our chassis
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.938 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 09:59:13 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:13.939 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb0410c-e568-4b8c-85ef-6b5cc9b8c500]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
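
The ovnmeta namespace teardown above runs through oslo.privsep: each reply[...] entry is the daemon's (msg_type, result) wire tuple, and the namespace removal itself is delegated to pyroute2 by neutron's privileged ip_lib (the remove_netns entry). A sketch of the underlying call, using the namespace name from the log and assuming pyroute2 is installed:

    from pyroute2 import netns

    # OVN metadata namespaces are named ovnmeta-<datapath-uuid>;
    # this one is taken from the log above.
    NS = 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f'

    if NS in netns.listnetns():  # tolerate an already-removed namespace
        netns.remove(NS)         # unlinks /var/run/netns/<NS>
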
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.094 253516 INFO nova.virt.libvirt.driver [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Deleting instance files /var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807_del
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.095 253516 INFO nova.virt.libvirt.driver [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Deletion of /var/lib/nova/instances/f5a6cffa-7adc-4794-942d-377379b2d807_del complete
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.138 253516 INFO nova.compute.manager [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Took 0.51 seconds to destroy the instance on the hypervisor.
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.139 253516 DEBUG oslo.service.loopingcall [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.139 253516 DEBUG nova.compute.manager [-] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.140 253516 DEBUG nova.network.neutron [-] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 09:59:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:14.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
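
The anonymous "HEAD / HTTP/1.0" requests in the beast access log are consistent with a load-balancer health probe rather than S3 traffic. A hypothetical equivalent of such a probe; the target host and port are assumptions, since the log records only the probing client's address:

    import http.client

    # http.client speaks HTTP/1.1 rather than the HTTP/1.0 seen in the
    # log, but the probe is the same: an unauthenticated HEAD /.
    conn = http.client.HTTPConnection('compute-0', 8080, timeout=5)  # assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 in the entries above
    conn.close()
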
Nov 25 09:59:14 compute-0 sudo[267199]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:59:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:59:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 09:59:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:59:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 09:59:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 09:59:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 09:59:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:59:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 09:59:14 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:59:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 09:59:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
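
Each handle_command entry above is a JSON-encoded monitor command dispatched on behalf of the mgr; clients reach the same interface through librados. A minimal sketch using the rados Python binding, assuming a conf file and keyring with sufficient caps at the default path:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()

    # Same command shape the mgr dispatches in the audit log above.
    ret, out, err = cluster.mon_command(
        json.dumps({'prefix': 'config generate-minimal-conf'}), b'')
    print(ret, out.decode())
    cluster.shutdown()
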
Nov 25 09:59:14 compute-0 sudo[267308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:59:14 compute-0 sudo[267308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:14 compute-0 sudo[267308]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:14 compute-0 sudo[267333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 09:59:14 compute-0 sudo[267333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.423 253516 DEBUG nova.compute.manager [req-2592add5-7bdb-4f39-9213-9e90a8b9a565 req-96532e3a-fd5f-4c95-ad45-359d31eb05f0 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Received event network-vif-unplugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.423 253516 DEBUG oslo_concurrency.lockutils [req-2592add5-7bdb-4f39-9213-9e90a8b9a565 req-96532e3a-fd5f-4c95-ad45-359d31eb05f0 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.424 253516 DEBUG oslo_concurrency.lockutils [req-2592add5-7bdb-4f39-9213-9e90a8b9a565 req-96532e3a-fd5f-4c95-ad45-359d31eb05f0 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.424 253516 DEBUG oslo_concurrency.lockutils [req-2592add5-7bdb-4f39-9213-9e90a8b9a565 req-96532e3a-fd5f-4c95-ad45-359d31eb05f0 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.424 253516 DEBUG nova.compute.manager [req-2592add5-7bdb-4f39-9213-9e90a8b9a565 req-96532e3a-fd5f-4c95-ad45-359d31eb05f0 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] No waiting events found dispatching network-vif-unplugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.424 253516 DEBUG nova.compute.manager [req-2592add5-7bdb-4f39-9213-9e90a8b9a565 req-96532e3a-fd5f-4c95-ad45-359d31eb05f0 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Received event network-vif-unplugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
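
The Acquiring/acquired/"released" triple around "f5a6cffa-...-events" is oslo.concurrency's named-lock pattern, which nova uses to serialize external event dispatch per instance. A minimal sketch of the same primitive:

    from oslo_concurrency import lockutils

    instance_uuid = 'f5a6cffa-7adc-4794-942d-377379b2d807'  # from the log

    # lockutils.lock() returns a context manager over a process-internal
    # semaphore; pop_instance_event holds it only long enough to look up
    # and remove the waiter registered for this (instance, event) pair.
    with lockutils.lock('%s-events' % instance_uuid):
        pass  # critical section
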
Nov 25 09:59:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:14.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:14 compute-0 podman[267388]: 2025-11-25 09:59:14.717855364 +0000 UTC m=+0.030074007 container create 6a3f6584460026f60e04d11c12d5d55afc2cc75ca4063d6c17420644411bc639 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_boyd, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:59:14 compute-0 systemd[1]: Started libpod-conmon-6a3f6584460026f60e04d11c12d5d55afc2cc75ca4063d6c17420644411bc639.scope.
Nov 25 09:59:14 compute-0 nova_compute[253512]: 2025-11-25 09:59:14.754 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:59:14 compute-0 podman[267388]: 2025-11-25 09:59:14.772874929 +0000 UTC m=+0.085093563 container init 6a3f6584460026f60e04d11c12d5d55afc2cc75ca4063d6c17420644411bc639 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:59:14 compute-0 podman[267388]: 2025-11-25 09:59:14.777412063 +0000 UTC m=+0.089630696 container start 6a3f6584460026f60e04d11c12d5d55afc2cc75ca4063d6c17420644411bc639 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:59:14 compute-0 podman[267388]: 2025-11-25 09:59:14.778920788 +0000 UTC m=+0.091139421 container attach 6a3f6584460026f60e04d11c12d5d55afc2cc75ca4063d6c17420644411bc639 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 09:59:14 compute-0 distracted_boyd[267401]: 167 167
Nov 25 09:59:14 compute-0 systemd[1]: libpod-6a3f6584460026f60e04d11c12d5d55afc2cc75ca4063d6c17420644411bc639.scope: Deactivated successfully.
Nov 25 09:59:14 compute-0 podman[267388]: 2025-11-25 09:59:14.781334277 +0000 UTC m=+0.093552911 container died 6a3f6584460026f60e04d11c12d5d55afc2cc75ca4063d6c17420644411bc639 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 09:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdc31b7b89cd0be2bc24c148daf6843ac3ea6e0aa94c17faa9f4b047c1c4aee7-merged.mount: Deactivated successfully.
Nov 25 09:59:14 compute-0 podman[267388]: 2025-11-25 09:59:14.797637263 +0000 UTC m=+0.109855896 container remove 6a3f6584460026f60e04d11c12d5d55afc2cc75ca4063d6c17420644411bc639 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 25 09:59:14 compute-0 podman[267388]: 2025-11-25 09:59:14.706303808 +0000 UTC m=+0.018522440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:59:14 compute-0 systemd[1]: libpod-conmon-6a3f6584460026f60e04d11c12d5d55afc2cc75ca4063d6c17420644411bc639.scope: Deactivated successfully.
Nov 25 09:59:14 compute-0 ceph-mon[74207]: pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Nov 25 09:59:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:59:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 09:59:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 09:59:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 09:59:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 09:59:14 compute-0 podman[267423]: 2025-11-25 09:59:14.918478539 +0000 UTC m=+0.030247035 container create b5562f562f8293aac9e136f1f5fb0030c82bdc674e16105e87a9cc5a9b82d0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_cerf, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 09:59:14 compute-0 systemd[1]: Started libpod-conmon-b5562f562f8293aac9e136f1f5fb0030c82bdc674e16105e87a9cc5a9b82d0cd.scope.
Nov 25 09:59:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e423b6d6d8a4565bb2085202cd104fdd2716a4460f05a7ee2c672e97b5b1e40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e423b6d6d8a4565bb2085202cd104fdd2716a4460f05a7ee2c672e97b5b1e40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e423b6d6d8a4565bb2085202cd104fdd2716a4460f05a7ee2c672e97b5b1e40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e423b6d6d8a4565bb2085202cd104fdd2716a4460f05a7ee2c672e97b5b1e40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e423b6d6d8a4565bb2085202cd104fdd2716a4460f05a7ee2c672e97b5b1e40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:59:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:14 compute-0 podman[267423]: 2025-11-25 09:59:14.977621587 +0000 UTC m=+0.089390072 container init b5562f562f8293aac9e136f1f5fb0030c82bdc674e16105e87a9cc5a9b82d0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:59:14 compute-0 podman[267423]: 2025-11-25 09:59:14.98236007 +0000 UTC m=+0.094128557 container start b5562f562f8293aac9e136f1f5fb0030c82bdc674e16105e87a9cc5a9b82d0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 09:59:14 compute-0 podman[267423]: 2025-11-25 09:59:14.984753112 +0000 UTC m=+0.096521618 container attach b5562f562f8293aac9e136f1f5fb0030c82bdc674e16105e87a9cc5a9b82d0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_cerf, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 09:59:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:59:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:59:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:59:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:59:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:59:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:59:15 compute-0 podman[267423]: 2025-11-25 09:59:14.906298736 +0000 UTC m=+0.018067223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:59:15 compute-0 gracious_cerf[267436]: --> passed data devices: 0 physical, 1 LVM
Nov 25 09:59:15 compute-0 gracious_cerf[267436]: --> All data devices are unavailable
Nov 25 09:59:15 compute-0 systemd[1]: libpod-b5562f562f8293aac9e136f1f5fb0030c82bdc674e16105e87a9cc5a9b82d0cd.scope: Deactivated successfully.
Nov 25 09:59:15 compute-0 podman[267423]: 2025-11-25 09:59:15.246270565 +0000 UTC m=+0.358039061 container died b5562f562f8293aac9e136f1f5fb0030c82bdc674e16105e87a9cc5a9b82d0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_cerf, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 09:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e423b6d6d8a4565bb2085202cd104fdd2716a4460f05a7ee2c672e97b5b1e40-merged.mount: Deactivated successfully.
Nov 25 09:59:15 compute-0 podman[267423]: 2025-11-25 09:59:15.266675944 +0000 UTC m=+0.378444431 container remove b5562f562f8293aac9e136f1f5fb0030c82bdc674e16105e87a9cc5a9b82d0cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_cerf, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:59:15 compute-0 systemd[1]: libpod-conmon-b5562f562f8293aac9e136f1f5fb0030c82bdc674e16105e87a9cc5a9b82d0cd.scope: Deactivated successfully.
Nov 25 09:59:15 compute-0 sudo[267333]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:15 compute-0 sudo[267461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:59:15 compute-0 sudo[267461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:15 compute-0 sudo[267461]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:15 compute-0 sudo[267486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 09:59:15 compute-0 sudo[267486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Nov 25 09:59:15 compute-0 nova_compute[253512]: 2025-11-25 09:59:15.631 253516 DEBUG nova.network.neutron [-] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:59:15 compute-0 nova_compute[253512]: 2025-11-25 09:59:15.641 253516 INFO nova.compute.manager [-] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Took 1.50 seconds to deallocate network for instance.
Nov 25 09:59:15 compute-0 nova_compute[253512]: 2025-11-25 09:59:15.669 253516 DEBUG oslo_concurrency.lockutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:15 compute-0 nova_compute[253512]: 2025-11-25 09:59:15.669 253516 DEBUG oslo_concurrency.lockutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:15 compute-0 podman[267543]: 2025-11-25 09:59:15.67732103 +0000 UTC m=+0.029561531 container create d5134f8e09bbe2048d40cd3a9d53bd7e26e5074a278949668e30b842dcd3438b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 09:59:15 compute-0 systemd[1]: Started libpod-conmon-d5134f8e09bbe2048d40cd3a9d53bd7e26e5074a278949668e30b842dcd3438b.scope.
Nov 25 09:59:15 compute-0 nova_compute[253512]: 2025-11-25 09:59:15.720 253516 DEBUG oslo_concurrency.processutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:59:15 compute-0 podman[267543]: 2025-11-25 09:59:15.734323383 +0000 UTC m=+0.086563904 container init d5134f8e09bbe2048d40cd3a9d53bd7e26e5074a278949668e30b842dcd3438b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jemison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:59:15 compute-0 podman[267543]: 2025-11-25 09:59:15.738903037 +0000 UTC m=+0.091143539 container start d5134f8e09bbe2048d40cd3a9d53bd7e26e5074a278949668e30b842dcd3438b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jemison, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 09:59:15 compute-0 podman[267543]: 2025-11-25 09:59:15.740243043 +0000 UTC m=+0.092483544 container attach d5134f8e09bbe2048d40cd3a9d53bd7e26e5074a278949668e30b842dcd3438b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 09:59:15 compute-0 infallible_jemison[267557]: 167 167
Nov 25 09:59:15 compute-0 systemd[1]: libpod-d5134f8e09bbe2048d40cd3a9d53bd7e26e5074a278949668e30b842dcd3438b.scope: Deactivated successfully.
Nov 25 09:59:15 compute-0 podman[267543]: 2025-11-25 09:59:15.742281005 +0000 UTC m=+0.094521507 container died d5134f8e09bbe2048d40cd3a9d53bd7e26e5074a278949668e30b842dcd3438b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 09:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac974048145ad654ff722f6b1a4e3323f38c106e104728478a9bba083a2826f6-merged.mount: Deactivated successfully.
Nov 25 09:59:15 compute-0 podman[267543]: 2025-11-25 09:59:15.665027623 +0000 UTC m=+0.017268145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:59:15 compute-0 podman[267543]: 2025-11-25 09:59:15.762752559 +0000 UTC m=+0.114993060 container remove d5134f8e09bbe2048d40cd3a9d53bd7e26e5074a278949668e30b842dcd3438b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 09:59:15 compute-0 systemd[1]: libpod-conmon-d5134f8e09bbe2048d40cd3a9d53bd7e26e5074a278949668e30b842dcd3438b.scope: Deactivated successfully.
Nov 25 09:59:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:15 compute-0 podman[267600]: 2025-11-25 09:59:15.90496798 +0000 UTC m=+0.043054991 container create 1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 09:59:15 compute-0 systemd[1]: Started libpod-conmon-1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699.scope.
Nov 25 09:59:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8395a6a6fa11287fd57e63d34999a7716c581d23679daea68d4b1cbb549f380/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8395a6a6fa11287fd57e63d34999a7716c581d23679daea68d4b1cbb549f380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8395a6a6fa11287fd57e63d34999a7716c581d23679daea68d4b1cbb549f380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8395a6a6fa11287fd57e63d34999a7716c581d23679daea68d4b1cbb549f380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:15 compute-0 podman[267600]: 2025-11-25 09:59:15.966575825 +0000 UTC m=+0.104662837 container init 1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 25 09:59:15 compute-0 podman[267600]: 2025-11-25 09:59:15.972025188 +0000 UTC m=+0.110112190 container start 1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:59:15 compute-0 podman[267600]: 2025-11-25 09:59:15.974109889 +0000 UTC m=+0.112196889 container attach 1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_perlman, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 25 09:59:15 compute-0 podman[267600]: 2025-11-25 09:59:15.885595027 +0000 UTC m=+0.023682048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:59:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:59:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465213788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.089 253516 DEBUG oslo_concurrency.processutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.094 253516 DEBUG nova.compute.provider_tree [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.109 253516 DEBUG nova.scheduler.client.report [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.122 253516 DEBUG oslo_concurrency.lockutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
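
Placement derives usable capacity from that inventory payload as int((total - reserved) * allocation_ratio) per resource class, so the values logged above amount to 16 VCPU, 7169 MB of RAM, and 52 GB of disk. A worked check:

    # Inventory values copied from the report.py entry above.
    inventory = {
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)  # VCPU 16, MEMORY_MB 7169, DISK_GB 52
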
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.141 253516 INFO nova.scheduler.client.report [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance f5a6cffa-7adc-4794-942d-377379b2d807
Nov 25 09:59:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:16.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.187 253516 DEBUG oslo_concurrency.lockutils [None req-8d4212bb-0df2-4c06-b089-ab0db2e9b06d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:16 compute-0 elastic_perlman[267614]: {
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:     "1": [
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:         {
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "devices": [
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "/dev/loop3"
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             ],
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "lv_name": "ceph_lv0",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "lv_size": "21470642176",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "name": "ceph_lv0",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "tags": {
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.cluster_name": "ceph",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.crush_device_class": "",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.encrypted": "0",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.osd_id": "1",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.type": "block",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.vdo": "0",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:                 "ceph.with_tpm": "0"
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             },
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "type": "block",
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:             "vg_name": "ceph_vg0"
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:         }
Nov 25 09:59:16 compute-0 elastic_perlman[267614]:     ]
Nov 25 09:59:16 compute-0 elastic_perlman[267614]: }
Nov 25 09:59:16 compute-0 systemd[1]: libpod-1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699.scope: Deactivated successfully.
Nov 25 09:59:16 compute-0 conmon[267614]: conmon 1347cdc240256d0ea7ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699.scope/container/memory.events
Nov 25 09:59:16 compute-0 podman[267600]: 2025-11-25 09:59:16.219368531 +0000 UTC m=+0.357455542 container died 1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_perlman, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 09:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8395a6a6fa11287fd57e63d34999a7716c581d23679daea68d4b1cbb549f380-merged.mount: Deactivated successfully.
Nov 25 09:59:16 compute-0 podman[267600]: 2025-11-25 09:59:16.241103817 +0000 UTC m=+0.379190818 container remove 1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Nov 25 09:59:16 compute-0 systemd[1]: libpod-conmon-1347cdc240256d0ea7ab49598bbed042735270bf08bf019c29f672e6f51d9699.scope: Deactivated successfully.
Nov 25 09:59:16 compute-0 sudo[267486]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:16 compute-0 sudo[267636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 09:59:16 compute-0 sudo[267636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:16 compute-0 sudo[267636]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:16 compute-0 sudo[267661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 09:59:16 compute-0 sudo[267661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:16.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:16 compute-0 podman[267717]: 2025-11-25 09:59:16.650662244 +0000 UTC m=+0.028023092 container create 61523288ca4242f17f3b8628d3c8790402446c35a0cc6d4508260fd1f5165ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wing, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 09:59:16 compute-0 systemd[1]: Started libpod-conmon-61523288ca4242f17f3b8628d3c8790402446c35a0cc6d4508260fd1f5165ecd.scope.
Nov 25 09:59:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:59:16 compute-0 podman[267717]: 2025-11-25 09:59:16.68774318 +0000 UTC m=+0.065104047 container init 61523288ca4242f17f3b8628d3c8790402446c35a0cc6d4508260fd1f5165ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 09:59:16 compute-0 podman[267717]: 2025-11-25 09:59:16.691634707 +0000 UTC m=+0.068995564 container start 61523288ca4242f17f3b8628d3c8790402446c35a0cc6d4508260fd1f5165ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:59:16 compute-0 podman[267717]: 2025-11-25 09:59:16.693091052 +0000 UTC m=+0.070451920 container attach 61523288ca4242f17f3b8628d3c8790402446c35a0cc6d4508260fd1f5165ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 09:59:16 compute-0 bold_wing[267730]: 167 167
Nov 25 09:59:16 compute-0 systemd[1]: libpod-61523288ca4242f17f3b8628d3c8790402446c35a0cc6d4508260fd1f5165ecd.scope: Deactivated successfully.
Nov 25 09:59:16 compute-0 podman[267717]: 2025-11-25 09:59:16.695698107 +0000 UTC m=+0.073058964 container died 61523288ca4242f17f3b8628d3c8790402446c35a0cc6d4508260fd1f5165ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 09:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff2d475013603f7b3607b0bdbcacbdf6793ea4796c6fce4c4bbeeec017bba8c8-merged.mount: Deactivated successfully.
Nov 25 09:59:16 compute-0 podman[267717]: 2025-11-25 09:59:16.714465428 +0000 UTC m=+0.091826275 container remove 61523288ca4242f17f3b8628d3c8790402446c35a0cc6d4508260fd1f5165ecd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wing, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:59:16 compute-0 podman[267717]: 2025-11-25 09:59:16.638499414 +0000 UTC m=+0.015860281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:59:16 compute-0 systemd[1]: libpod-conmon-61523288ca4242f17f3b8628d3c8790402446c35a0cc6d4508260fd1f5165ecd.scope: Deactivated successfully.
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.727 253516 DEBUG nova.compute.manager [req-ec004c57-42d3-49b3-b416-55be648f7e07 req-a8c1cd76-23a1-4475-b1de-c3a4a4ae65b1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Received event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.728 253516 DEBUG oslo_concurrency.lockutils [req-ec004c57-42d3-49b3-b416-55be648f7e07 req-a8c1cd76-23a1-4475-b1de-c3a4a4ae65b1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.728 253516 DEBUG oslo_concurrency.lockutils [req-ec004c57-42d3-49b3-b416-55be648f7e07 req-a8c1cd76-23a1-4475-b1de-c3a4a4ae65b1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.728 253516 DEBUG oslo_concurrency.lockutils [req-ec004c57-42d3-49b3-b416-55be648f7e07 req-a8c1cd76-23a1-4475-b1de-c3a4a4ae65b1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f5a6cffa-7adc-4794-942d-377379b2d807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.729 253516 DEBUG nova.compute.manager [req-ec004c57-42d3-49b3-b416-55be648f7e07 req-a8c1cd76-23a1-4475-b1de-c3a4a4ae65b1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] No waiting events found dispatching network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:59:16 compute-0 nova_compute[253512]: 2025-11-25 09:59:16.729 253516 WARNING nova.compute.manager [req-ec004c57-42d3-49b3-b416-55be648f7e07 req-a8c1cd76-23a1-4475-b1de-c3a4a4ae65b1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Received unexpected event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 for instance with vm_state deleted and task_state None.
Nov 25 09:59:16 compute-0 podman[267752]: 2025-11-25 09:59:16.833512981 +0000 UTC m=+0.027387372 container create fed8e7208b9501391dfbfa598a394b32293c82040b9ee75e719d345879e31f7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_sammet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:59:16 compute-0 systemd[1]: Started libpod-conmon-fed8e7208b9501391dfbfa598a394b32293c82040b9ee75e719d345879e31f7e.scope.
Nov 25 09:59:16 compute-0 ceph-mon[74207]: pgmap v841: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Nov 25 09:59:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3465213788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3db3e56822eda5f8d99d2e8bbec78c7241fe76d80fb3cc34ec4cd605d21a9f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3db3e56822eda5f8d99d2e8bbec78c7241fe76d80fb3cc34ec4cd605d21a9f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3db3e56822eda5f8d99d2e8bbec78c7241fe76d80fb3cc34ec4cd605d21a9f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3db3e56822eda5f8d99d2e8bbec78c7241fe76d80fb3cc34ec4cd605d21a9f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:16 compute-0 podman[267752]: 2025-11-25 09:59:16.891259188 +0000 UTC m=+0.085133598 container init fed8e7208b9501391dfbfa598a394b32293c82040b9ee75e719d345879e31f7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 09:59:16 compute-0 podman[267752]: 2025-11-25 09:59:16.897777495 +0000 UTC m=+0.091651887 container start fed8e7208b9501391dfbfa598a394b32293c82040b9ee75e719d345879e31f7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_sammet, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 09:59:16 compute-0 podman[267752]: 2025-11-25 09:59:16.898966357 +0000 UTC m=+0.092840748 container attach fed8e7208b9501391dfbfa598a394b32293c82040b9ee75e719d345879e31f7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_sammet, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 09:59:16 compute-0 podman[267752]: 2025-11-25 09:59:16.823110662 +0000 UTC m=+0.016985072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 09:59:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:17.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:17.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:17.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:17.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:17 compute-0 lvm[267843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 09:59:17 compute-0 lvm[267843]: VG ceph_vg0 finished
Nov 25 09:59:17 compute-0 crazy_sammet[267766]: {}
Nov 25 09:59:17 compute-0 systemd[1]: libpod-fed8e7208b9501391dfbfa598a394b32293c82040b9ee75e719d345879e31f7e.scope: Deactivated successfully.
Nov 25 09:59:17 compute-0 podman[267752]: 2025-11-25 09:59:17.418818846 +0000 UTC m=+0.612693237 container died fed8e7208b9501391dfbfa598a394b32293c82040b9ee75e719d345879e31f7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 25 09:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3db3e56822eda5f8d99d2e8bbec78c7241fe76d80fb3cc34ec4cd605d21a9f8-merged.mount: Deactivated successfully.
Nov 25 09:59:17 compute-0 podman[267752]: 2025-11-25 09:59:17.442528314 +0000 UTC m=+0.636402705 container remove fed8e7208b9501391dfbfa598a394b32293c82040b9ee75e719d345879e31f7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_sammet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 09:59:17 compute-0 systemd[1]: libpod-conmon-fed8e7208b9501391dfbfa598a394b32293c82040b9ee75e719d345879e31f7e.scope: Deactivated successfully.
Nov 25 09:59:17 compute-0 sudo[267661]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 09:59:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 09:59:17 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:17 compute-0 sudo[267854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 09:59:17 compute-0 sudo[267854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:17 compute-0 sudo[267854]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 09:59:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:18.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:18 compute-0 nova_compute[253512]: 2025-11-25 09:59:18.375 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:18 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:18.375 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:59:18 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:18.376 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 09:59:18 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:18.377 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:18.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:18 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 09:59:18 compute-0 ceph-mon[74207]: pgmap v842: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 09:59:18 compute-0 nova_compute[253512]: 2025-11-25 09:59:18.884 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 09:59:19 compute-0 nova_compute[253512]: 2025-11-25 09:59:19.756 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:20] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 25 09:59:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:20] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 25 09:59:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:20.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:20 compute-0 ceph-mon[74207]: pgmap v843: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 09:59:20 compute-0 nova_compute[253512]: 2025-11-25 09:59:20.639 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:59:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/801279752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:20 compute-0 nova_compute[253512]: 2025-11-25 09:59:20.739 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 25 09:59:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/801279752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/253252585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:22.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:22 compute-0 ceph-mon[74207]: pgmap v844: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 25 09:59:22 compute-0 sudo[267886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:59:22 compute-0 sudo[267886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:22 compute-0 sudo[267886]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 708 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Nov 25 09:59:23 compute-0 nova_compute[253512]: 2025-11-25 09:59:23.887 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:24 compute-0 nova_compute[253512]: 2025-11-25 09:59:24.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:24 compute-0 nova_compute[253512]: 2025-11-25 09:59:24.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:24.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:24 compute-0 nova_compute[253512]: 2025-11-25 09:59:24.488 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:24 compute-0 nova_compute[253512]: 2025-11-25 09:59:24.488 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:24 compute-0 nova_compute[253512]: 2025-11-25 09:59:24.488 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:24 compute-0 nova_compute[253512]: 2025-11-25 09:59:24.488 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 09:59:24 compute-0 nova_compute[253512]: 2025-11-25 09:59:24.489 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:24 compute-0 ceph-mon[74207]: pgmap v845: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 708 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Nov 25 09:59:24 compute-0 nova_compute[253512]: 2025-11-25 09:59:24.757 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:59:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280660865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:24 compute-0 nova_compute[253512]: 2025-11-25 09:59:24.825 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.015 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.017 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4559MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.017 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.017 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.067 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.068 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.105 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:59:25 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/792236702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.442 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.445 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.455 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.471 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 09:59:25 compute-0 nova_compute[253512]: 2025-11-25 09:59:25.471 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 708 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Nov 25 09:59:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3280660865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3985065017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/792236702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:25 compute-0 podman[267960]: 2025-11-25 09:59:25.975384093 +0000 UTC m=+0.036776110 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 09:59:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:26.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:26 compute-0 nova_compute[253512]: 2025-11-25 09:59:26.468 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:26 compute-0 nova_compute[253512]: 2025-11-25 09:59:26.468 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:26 compute-0 nova_compute[253512]: 2025-11-25 09:59:26.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:26 compute-0 nova_compute[253512]: 2025-11-25 09:59:26.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:26.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:26 compute-0 ceph-mon[74207]: pgmap v846: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 708 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Nov 25 09:59:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/235832975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:27.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:27.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:27.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:27.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 709 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Nov 25 09:59:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:28.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 09:59:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:28.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.482 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.482 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 09:59:28 compute-0 ceph-mon[74207]: pgmap v847: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 709 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.851 253516 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064753.8504486, f5a6cffa-7adc-4794-942d-377379b2d807 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.851 253516 INFO nova.compute.manager [-] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] VM Stopped (Lifecycle Event)
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.868 253516 DEBUG nova.compute.manager [None req-4fe5e184-6db4-479d-a691-5f97c92f357c - - - - - -] [instance: f5a6cffa-7adc-4794-942d-377379b2d807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:59:28 compute-0 nova_compute[253512]: 2025-11-25 09:59:28.890 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 0 op/s
Nov 25 09:59:29 compute-0 nova_compute[253512]: 2025-11-25 09:59:29.757 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:59:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:30.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:30] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 25 09:59:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:30] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 25 09:59:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.002000021s ======
Nov 25 09:59:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:30.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Nov 25 09:59:30 compute-0 ceph-mon[74207]: pgmap v848: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 0 op/s
Nov 25 09:59:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Nov 25 09:59:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:32.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:32.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:32 compute-0 nova_compute[253512]: 2025-11-25 09:59:32.608 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "f45667c6-e5ff-4c8f-9703-e024233fe578" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:32 compute-0 nova_compute[253512]: 2025-11-25 09:59:32.608 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:32 compute-0 nova_compute[253512]: 2025-11-25 09:59:32.622 253516 DEBUG nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 09:59:32 compute-0 ceph-mon[74207]: pgmap v849: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Nov 25 09:59:32 compute-0 nova_compute[253512]: 2025-11-25 09:59:32.673 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:32 compute-0 nova_compute[253512]: 2025-11-25 09:59:32.673 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:32 compute-0 nova_compute[253512]: 2025-11-25 09:59:32.677 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 09:59:32 compute-0 nova_compute[253512]: 2025-11-25 09:59:32.678 253516 INFO nova.compute.claims [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Claim successful on node compute-0.ctlplane.example.com
Nov 25 09:59:32 compute-0 nova_compute[253512]: 2025-11-25 09:59:32.752 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:32 compute-0 podman[268002]: 2025-11-25 09:59:32.991406663 +0000 UTC m=+0.053976298 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 09:59:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 09:59:33 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430379519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.093 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.096 253516 DEBUG nova.compute.provider_tree [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.109 253516 DEBUG nova.scheduler.client.report [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
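
The inventory payload above is what placement compares against; the capacity it implies follows from (total - reserved) × allocation_ratio per resource class. Worked with the exact figures from the log line:

    # Schedulable capacity implied by the reported inventory.
    inv = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 16, MEMORY_MB: 7169, DISK_GB: 52.2

So this 4-vCPU host can overcommit to 16 guest vCPUs, while disk is undercommitted (ratio 0.9), leaving 52.2 GB schedulable.
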
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.128 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.128 253516 DEBUG nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.161 253516 DEBUG nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.161 253516 DEBUG nova.network.neutron [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.176 253516 INFO nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.185 253516 DEBUG nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.249 253516 DEBUG nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.250 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.250 253516 INFO nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Creating image(s)
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.268 253516 DEBUG nova.storage.rbd_utils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f45667c6-e5ff-4c8f-9703-e024233fe578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.285 253516 DEBUG nova.storage.rbd_utils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f45667c6-e5ff-4c8f-9703-e024233fe578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.301 253516 DEBUG nova.storage.rbd_utils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f45667c6-e5ff-4c8f-9703-e024233fe578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.303 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.336 253516 DEBUG nova.policy [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.349 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
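
The qemu-img probe above is deliberately wrapped in oslo_concurrency.prlimit with --as=1073741824 and --cpu=30, capping the child at 1 GiB of address space and 30 s of CPU so a malformed base image cannot drive unbounded resource use during introspection. A POSIX-only sketch of the same containment using the standard library; the qemu-img arguments and path are from the log, the wrapper itself is illustrative rather than nova's code:

    import resource
    import subprocess

    def limited():
        # Same caps the prlimit wrapper applies: 1 GiB address space, 30 s CPU.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    out = subprocess.run(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9"],
        preexec_fn=limited, capture_output=True, text=True, check=True)
    print(out.stdout)
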
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.350 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.350 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.351 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.369 253516 DEBUG nova.storage.rbd_utils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f45667c6-e5ff-4c8f-9703-e024233fe578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.372 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 f45667c6-e5ff-4c8f-9703-e024233fe578_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.501 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 f45667c6-e5ff-4c8f-9703-e024233fe578_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.545 253516 DEBUG nova.storage.rbd_utils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image f45667c6-e5ff-4c8f-9703-e024233fe578_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
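
Taken together, the last two steps materialize the root disk: the cached base image is imported into the vms pool as <uuid>_disk, then grown to the flavor's 1 GiB root size. Nova performs the resize through its librbd helper (nova.storage.rbd_utils); the CLI resize below is an assumed equivalent for illustration, while the import command is exactly the one logged:

    import subprocess

    base = "/var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9"
    disk = "f45667c6-e5ff-4c8f-9703-e024233fe578_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the cached base image into the vms pool (command as logged).
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *auth], check=True)
    # Grow it to the flavor's 1 GiB root disk (nova uses librbd for this step).
    subprocess.run(["rbd", "resize", "--size", "1G", f"vms/{disk}", *auth],
                   check=True)
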
Nov 25 09:59:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.596 253516 DEBUG nova.objects.instance [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid f45667c6-e5ff-4c8f-9703-e024233fe578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.608 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.608 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Ensure instance console log exists: /var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.609 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.609 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.609 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:33 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2430379519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 09:59:33 compute-0 nova_compute[253512]: 2025-11-25 09:59:33.891 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:34 compute-0 nova_compute[253512]: 2025-11-25 09:59:34.189 253516 DEBUG nova.network.neutron [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Successfully created port: 80bccd30-71a6-4a5c-b968-217f0b369151 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 09:59:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:34.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:34 compute-0 ceph-mon[74207]: pgmap v850: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:59:34 compute-0 nova_compute[253512]: 2025-11-25 09:59:34.759 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:34 compute-0 nova_compute[253512]: 2025-11-25 09:59:34.905 253516 DEBUG nova.network.neutron [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Successfully updated port: 80bccd30-71a6-4a5c-b968-217f0b369151 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 09:59:34 compute-0 nova_compute[253512]: 2025-11-25 09:59:34.915 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:59:34 compute-0 nova_compute[253512]: 2025-11-25 09:59:34.915 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:59:34 compute-0 nova_compute[253512]: 2025-11-25 09:59:34.916 253516 DEBUG nova.network.neutron [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 09:59:34 compute-0 nova_compute[253512]: 2025-11-25 09:59:34.969 253516 DEBUG nova.compute.manager [req-9caf57d2-78d0-450b-b932-3e3d7dafab3f req-ee3d3aa3-64ed-4696-983a-360192531b63 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received event network-changed-80bccd30-71a6-4a5c-b968-217f0b369151 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:59:34 compute-0 nova_compute[253512]: 2025-11-25 09:59:34.970 253516 DEBUG nova.compute.manager [req-9caf57d2-78d0-450b-b932-3e3d7dafab3f req-ee3d3aa3-64ed-4696-983a-360192531b63 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Refreshing instance network info cache due to event network-changed-80bccd30-71a6-4a5c-b968-217f0b369151. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:59:34 compute-0 nova_compute[253512]: 2025-11-25 09:59:34.970 253516 DEBUG oslo_concurrency.lockutils [req-9caf57d2-78d0-450b-b932-3e3d7dafab3f req-ee3d3aa3-64ed-4696-983a-360192531b63 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:59:35 compute-0 nova_compute[253512]: 2025-11-25 09:59:35.027 253516 DEBUG nova.network.neutron [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 09:59:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:59:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:36.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:36.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:36 compute-0 ceph-mon[74207]: pgmap v851: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.678 253516 DEBUG nova.network.neutron [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Updating instance_info_cache with network_info: [{"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.695 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.695 253516 DEBUG nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Instance network_info: |[{"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.696 253516 DEBUG oslo_concurrency.lockutils [req-9caf57d2-78d0-450b-b932-3e3d7dafab3f req-ee3d3aa3-64ed-4696-983a-360192531b63 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.696 253516 DEBUG nova.network.neutron [req-9caf57d2-78d0-450b-b932-3e3d7dafab3f req-ee3d3aa3-64ed-4696-983a-360192531b63 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Refreshing network info cache for port 80bccd30-71a6-4a5c-b968-217f0b369151 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.698 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Start _get_guest_xml network_info=[{"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.701 253516 WARNING nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.706 253516 DEBUG nova.virt.libvirt.host [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.706 253516 DEBUG nova.virt.libvirt.host [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.708 253516 DEBUG nova.virt.libvirt.host [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.708 253516 DEBUG nova.virt.libvirt.host [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.709 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.709 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.709 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.710 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.710 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.710 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.710 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.710 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.710 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.711 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.711 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.711 253516 DEBUG nova.virt.hardware [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
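
With no topology constraints from flavor or image (all preferences 0:0:0, caps at 65536), the search space collapses to factorizations of the vCPU count, and for one vCPU the only candidate is 1 socket × 1 core × 1 thread. A simplified sketch of that enumeration; it mirrors the idea behind nova.virt.hardware, not its exact code:

    # Enumerate (sockets, cores, threads) splits of the vCPU count under the
    # default 65536 limits; with one vCPU the only candidate is (1, 1, 1).
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]
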
Nov 25 09:59:36 compute-0 nova_compute[253512]: 2025-11-25 09:59:36.713 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:59:37 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1593650733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.053 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:37.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:37.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:37.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:37.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.073 253516 DEBUG nova.storage.rbd_utils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f45667c6-e5ff-4c8f-9703-e024233fe578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.075 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 09:59:37 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/845505631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.414 253516 DEBUG nova.network.neutron [req-9caf57d2-78d0-450b-b932-3e3d7dafab3f req-ee3d3aa3-64ed-4696-983a-360192531b63 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Updated VIF entry in instance network info cache for port 80bccd30-71a6-4a5c-b968-217f0b369151. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.415 253516 DEBUG nova.network.neutron [req-9caf57d2-78d0-450b-b932-3e3d7dafab3f req-ee3d3aa3-64ed-4696-983a-360192531b63 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Updating instance_info_cache with network_info: [{"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.422 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.423 253516 DEBUG nova.virt.libvirt.vif [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:59:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1106207513',display_name='tempest-TestNetworkBasicOps-server-1106207513',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1106207513',id=10,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJboZEuszExrBq+nvCFMnpSf06Zid/CG09pXMThTrdsa5Tt+2Fa8SDcnFZ2+xfjH7RGEhXpXdX6ZEqbAMEuV4klziHi4NX1pVM6aAsT+NQdfrIjm1jzpO10iIK76Qkij0g==',key_name='tempest-TestNetworkBasicOps-833329281',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-xahlyy1s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:59:33Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=f45667c6-e5ff-4c8f-9703-e024233fe578,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.424 253516 DEBUG nova.network.os_vif_util [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.425 253516 DEBUG nova.network.os_vif_util [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:68:1b,bridge_name='br-int',has_traffic_filtering=True,id=80bccd30-71a6-4a5c-b968-217f0b369151,network=Network(cd37924c-e3f9-4681-8036-2dfe5148e873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80bccd30-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
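
The VIF blob logged twice above is plain JSON, so the handful of fields that actually drive the plug step can be pulled out with the standard library. A minimal sketch; the vif_json string below is an abbreviated stand-in for the full {...} blob in the log, keeping only the keys that are read:

    import json

    # Abbreviated copy of the "Converting VIF" blob above (illustration only).
    vif_json = '''{"id": "80bccd30-71a6-4a5c-b968-217f0b369151",
                   "address": "fa:16:3e:2c:68:1b",
                   "type": "ovs",
                   "devname": "tap80bccd30-71",
                   "details": {"bridge_name": "br-int", "datapath_type": "system"},
                   "network": {"meta": {"mtu": 1442}}}'''

    vif = json.loads(vif_json)
    # These are exactly the values that reappear in the converted
    # VIFOpenVSwitch object and in the OVS transaction further down.
    print(vif["details"]["bridge_name"])   # br-int
    print(vif["devname"])                  # tap80bccd30-71
    print(vif["address"])                  # fa:16:3e:2c:68:1b
    print(vif["network"]["meta"]["mtu"])   # 1442
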
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.426 253516 DEBUG nova.objects.instance [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid f45667c6-e5ff-4c8f-9703-e024233fe578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.435 253516 DEBUG oslo_concurrency.lockutils [req-9caf57d2-78d0-450b-b932-3e3d7dafab3f req-ee3d3aa3-64ed-4696-983a-360192531b63 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.437 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] End _get_guest_xml xml=<domain type="kvm">
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <uuid>f45667c6-e5ff-4c8f-9703-e024233fe578</uuid>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <name>instance-0000000a</name>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <memory>131072</memory>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <vcpu>1</vcpu>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <metadata>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <nova:name>tempest-TestNetworkBasicOps-server-1106207513</nova:name>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <nova:creationTime>2025-11-25 09:59:36</nova:creationTime>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <nova:flavor name="m1.nano">
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <nova:memory>128</nova:memory>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <nova:disk>1</nova:disk>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <nova:swap>0</nova:swap>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <nova:vcpus>1</nova:vcpus>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       </nova:flavor>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <nova:owner>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       </nova:owner>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <nova:ports>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <nova:port uuid="80bccd30-71a6-4a5c-b968-217f0b369151">
Nov 25 09:59:37 compute-0 nova_compute[253512]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         </nova:port>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       </nova:ports>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     </nova:instance>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   </metadata>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <sysinfo type="smbios">
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <system>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <entry name="manufacturer">RDO</entry>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <entry name="product">OpenStack Compute</entry>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <entry name="serial">f45667c6-e5ff-4c8f-9703-e024233fe578</entry>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <entry name="uuid">f45667c6-e5ff-4c8f-9703-e024233fe578</entry>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <entry name="family">Virtual Machine</entry>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     </system>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   </sysinfo>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <os>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <boot dev="hd"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <smbios mode="sysinfo"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   </os>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <features>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <acpi/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <apic/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <vmcoreinfo/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   </features>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <clock offset="utc">
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <timer name="hpet" present="no"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   </clock>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <cpu mode="host-model" match="exact">
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   </cpu>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   <devices>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <disk type="network" device="disk">
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/f45667c6-e5ff-4c8f-9703-e024233fe578_disk">
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       </source>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <target dev="vda" bus="virtio"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <disk type="network" device="cdrom">
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <driver type="raw" cache="none"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <source protocol="rbd" name="vms/f45667c6-e5ff-4c8f-9703-e024233fe578_disk.config">
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <host name="192.168.122.100" port="6789"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <host name="192.168.122.102" port="6789"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <host name="192.168.122.101" port="6789"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       </source>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <auth username="openstack">
Nov 25 09:59:37 compute-0 nova_compute[253512]:         <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       </auth>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <target dev="sda" bus="sata"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     </disk>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <interface type="ethernet">
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <mac address="fa:16:3e:2c:68:1b"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <mtu size="1442"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <target dev="tap80bccd30-71"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     </interface>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <serial type="pty">
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <log file="/var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578/console.log" append="off"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     </serial>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <video>
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <model type="virtio"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     </video>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <input type="tablet" bus="usb"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <rng model="virtio">
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <backend model="random">/dev/urandom</backend>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     </rng>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <controller type="usb" index="0"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     <memballoon model="virtio">
Nov 25 09:59:37 compute-0 nova_compute[253512]:       <stats period="10"/>
Nov 25 09:59:37 compute-0 nova_compute[253512]:     </memballoon>
Nov 25 09:59:37 compute-0 nova_compute[253512]:   </devices>
Nov 25 09:59:37 compute-0 nova_compute[253512]: </domain>
Nov 25 09:59:37 compute-0 nova_compute[253512]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
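
The dump that ends above is the complete libvirt domain XML the driver hands to libvirt for instance-0000000a. Saved to a file, the nova-specific pieces can be read back with the standard library alone; a minimal sketch, assuming guest.xml holds the <domain> document above:

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    root = ET.parse("guest.xml").getroot()   # <domain type="kvm">
    print(root.findtext("name"))             # instance-0000000a
    print(root.findtext("uuid"))             # f45667c6-e5ff-4c8f-9703-e024233fe578
    print(int(root.findtext("memory")))      # 131072 (libvirt defaults to KiB, i.e. 128 MiB)

    # The <metadata> block carries nova's own details in its namespace.
    flavor = root.find(".//nova:flavor", NOVA_NS)
    print(flavor.get("name"))                                  # m1.nano
    print(flavor.findtext("nova:vcpus", namespaces=NOVA_NS))   # 1
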
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.438 253516 DEBUG nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Preparing to wait for external event network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.438 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.438 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.438 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
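
The three lockutils lines above show nova registering for network-vif-plugged before it plugs the VIF, so neutron's notification cannot arrive in a window where nobody is listening. The shape of that prepare/deliver/wait pattern, reduced to the standard library (an illustration of the ordering, not nova's actual API):

    import threading

    _events = {}
    _lock = threading.Lock()

    def prepare_for_event(name):
        # Register interest *before* triggering the action that causes the event.
        with _lock:
            return _events.setdefault(name, threading.Event())

    def deliver_event(name):
        # Called from the handler that receives the external notification.
        with _lock:
            _events.setdefault(name, threading.Event()).set()

    waiter = prepare_for_event("network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151")
    # ... plug the VIF here; the event normally arrives later, from another thread ...
    deliver_event("network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151")
    assert waiter.wait(timeout=300)   # nova gives up after vif_plugging_timeout (300s by default)
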
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.439 253516 DEBUG nova.virt.libvirt.vif [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:59:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1106207513',display_name='tempest-TestNetworkBasicOps-server-1106207513',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1106207513',id=10,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJboZEuszExrBq+nvCFMnpSf06Zid/CG09pXMThTrdsa5Tt+2Fa8SDcnFZ2+xfjH7RGEhXpXdX6ZEqbAMEuV4klziHi4NX1pVM6aAsT+NQdfrIjm1jzpO10iIK76Qkij0g==',key_name='tempest-TestNetworkBasicOps-833329281',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-xahlyy1s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:59:33Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=f45667c6-e5ff-4c8f-9703-e024233fe578,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.439 253516 DEBUG nova.network.os_vif_util [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.439 253516 DEBUG nova.network.os_vif_util [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:68:1b,bridge_name='br-int',has_traffic_filtering=True,id=80bccd30-71a6-4a5c-b968-217f0b369151,network=Network(cd37924c-e3f9-4681-8036-2dfe5148e873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80bccd30-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.439 253516 DEBUG os_vif [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:68:1b,bridge_name='br-int',has_traffic_filtering=True,id=80bccd30-71a6-4a5c-b968-217f0b369151,network=Network(cd37924c-e3f9-4681-8036-2dfe5148e873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80bccd30-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.440 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.440 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.440 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.442 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.442 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80bccd30-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.443 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap80bccd30-71, col_values=(('external_ids', {'iface-id': '80bccd30-71a6-4a5c-b968-217f0b369151', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:68:1b', 'vm-uuid': 'f45667c6-e5ff-4c8f-9703-e024233fe578'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.444 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:37 compute-0 NetworkManager[48903]: <info>  [1764064777.4448] manager: (tap80bccd30-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.446 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.449 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.449 253516 INFO os_vif [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:68:1b,bridge_name='br-int',has_traffic_filtering=True,id=80bccd30-71a6-4a5c-b968-217f0b369151,network=Network(cd37924c-e3f9-4681-8036-2dfe5148e873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80bccd30-71')
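
The AddBridgeCommand/AddPortCommand/DbSetCommand transaction above is what os-vif sends to ovsdb-server over the OVSDB protocol; the same change can be expressed as a single ovs-vsctl invocation. A sketch of that equivalent from Python, assuming ovs-vsctl is on PATH (os-vif itself does not shell out):

    import subprocess

    bridge, port = "br-int", "tap80bccd30-71"
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", bridge, port, "--",
         "set", "Interface", port,
         "external_ids:iface-id=80bccd30-71a6-4a5c-b968-217f0b369151",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:2c:68:1b",
         "external_ids:vm-uuid=f45667c6-e5ff-4c8f-9703-e024233fe578"],
        check=True)

The external_ids values are what ovn-controller matches a moment later when it claims the lport for this chassis.
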
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.482 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.482 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.483 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:2c:68:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.483 253516 INFO nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Using config drive
Nov 25 09:59:37 compute-0 nova_compute[253512]: 2025-11-25 09:59:37.498 253516 DEBUG nova.storage.rbd_utils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f45667c6-e5ff-4c8f-9703-e024233fe578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:59:37 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1593650733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:37 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/845505631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 09:59:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.169 253516 INFO nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Creating config drive at /var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578/disk.config
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.174 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxp0wlg9s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:38.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.290 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxp0wlg9s" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.307 253516 DEBUG nova.storage.rbd_utils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image f45667c6-e5ff-4c8f-9703-e024233fe578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.309 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578/disk.config f45667c6-e5ff-4c8f-9703-e024233fe578_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.387 253516 DEBUG oslo_concurrency.processutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578/disk.config f45667c6-e5ff-4c8f-9703-e024233fe578_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.388 253516 INFO nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Deleting local config drive /var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578/disk.config because it was imported into RBD.
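
The config drive is built locally with mkisofs and, because this deployment stores disks in RBD, immediately imported into the vms pool so the local ISO can be deleted. A condensed sketch of those two subprocess calls, reusing the exact flags from the log:

    import subprocess

    uuid = "f45667c6-e5ff-4c8f-9703-e024233fe578"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # Build an ISO9660 config drive (volume label config-2, Joliet + Rock Ridge)
    # from the temporary staging directory the driver populated.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmpxp0wlg9s"],
        check=True)

    # Import the ISO into Ceph as <uuid>_disk.config, as the driver does above.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
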
Nov 25 09:59:38 compute-0 kernel: tap80bccd30-71: entered promiscuous mode
Nov 25 09:59:38 compute-0 NetworkManager[48903]: <info>  [1764064778.4207] manager: (tap80bccd30-71): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 25 09:59:38 compute-0 ovn_controller[155020]: 2025-11-25T09:59:38Z|00096|binding|INFO|Claiming lport 80bccd30-71a6-4a5c-b968-217f0b369151 for this chassis.
Nov 25 09:59:38 compute-0 ovn_controller[155020]: 2025-11-25T09:59:38Z|00097|binding|INFO|80bccd30-71a6-4a5c-b968-217f0b369151: Claiming fa:16:3e:2c:68:1b 10.100.0.12
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.423 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.425 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.436 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:68:1b 10.100.0.12'], port_security=['fa:16:3e:2c:68:1b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f45667c6-e5ff-4c8f-9703-e024233fe578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd37924c-e3f9-4681-8036-2dfe5148e873', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'db961b1f-db28-49d9-9e53-a56c89630a87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68596f83-283d-427b-947e-397f8c4aea87, chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=80bccd30-71a6-4a5c-b968-217f0b369151) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.437 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 80bccd30-71a6-4a5c-b968-217f0b369151 in datapath cd37924c-e3f9-4681-8036-2dfe5148e873 bound to our chassis
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.437 164791 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd37924c-e3f9-4681-8036-2dfe5148e873
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.445 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[bab44dfa-f7cf-4ed9-9ea1-7dbb98fe4aea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.447 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd37924c-e1 in ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
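
Creating that VETH pair inside the ovnmeta- namespace goes through neutron's privsep daemon (the reply lines around here), but underneath it is an ordinary pair of netlink operations. A rough sketch of the same end state with pyroute2, the library neutron's ip_lib wraps; it assumes the namespace already exists under /var/run/netns and must run as root:

    from pyroute2 import IPRoute

    ns = "ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873"

    ip = IPRoute()
    # Create the veth pair in the root namespace ...
    ip.link("add", ifname="tapcd37924c-e0", kind="veth", peer="tapcd37924c-e1")
    # ... then move the -e1 end into the metadata namespace.
    idx = ip.link_lookup(ifname="tapcd37924c-e1")[0]
    ip.link("set", index=idx, net_ns_fd=ns)
    ip.close()
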
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.448 258952 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd37924c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.448 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[92726df5-374b-42b0-b3ba-eaabdcdfa1eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.449 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa969b8-c2fd-461e-9a5a-cd1dece7b9e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 systemd-udevd[268333]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:59:38 compute-0 NetworkManager[48903]: <info>  [1764064778.4588] device (tap80bccd30-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:59:38 compute-0 NetworkManager[48903]: <info>  [1764064778.4594] device (tap80bccd30-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 09:59:38 compute-0 systemd-machined[216497]: New machine qemu-5-instance-0000000a.
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.463 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[70e3ab8e-b1c2-40ee-95fe-67489231c534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.483 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[738c9c4a-d45c-4de6-8554-a5ecb6b0092e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-0000000a.
Nov 25 09:59:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:38.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.505 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[701ca150-a4fe-44b5-882c-9cf9d0e26033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.506 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:38 compute-0 NetworkManager[48903]: <info>  [1764064778.5104] manager: (tapcd37924c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Nov 25 09:59:38 compute-0 systemd-udevd[268337]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.511 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb278c4-6bab-4eb1-aae7-e074eaa0309d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_controller[155020]: 2025-11-25T09:59:38Z|00098|binding|INFO|Setting lport 80bccd30-71a6-4a5c-b968-217f0b369151 ovn-installed in OVS
Nov 25 09:59:38 compute-0 ovn_controller[155020]: 2025-11-25T09:59:38Z|00099|binding|INFO|Setting lport 80bccd30-71a6-4a5c-b968-217f0b369151 up in Southbound
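
Those two binding messages are ovn-controller marking the lport as installed in OVS and flipping its Port_Binding row to up in the southbound database. Whether that took effect can be checked from the southbound DB; a sketch assuming ovn-sbctl is installed on a node that can reach it:

    import json
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "--columns=logical_port,up,chassis",
         "find", "Port_Binding",
         "logical_port=80bccd30-71a6-4a5c-b968-217f0b369151"],
        check=True, capture_output=True, text=True).stdout
    print(json.loads(out))   # expect up=[true] once the claim above lands
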
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.515 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.534 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc4c01f-57b0-4080-aaa3-6ccb167d6144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.536 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[966be555-ed17-42d6-9feb-f0f9d2fad84a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 NetworkManager[48903]: <info>  [1764064778.5513] device (tapcd37924c-e0): carrier: link connected
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.554 258968 DEBUG oslo.privsep.daemon [-] privsep: reply[be89a33a-0390-4392-85af-5881ea6a08d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.567 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[38e50431-0046-4596-8f7b-94c926dd2fe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd37924c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:17:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 355912, 'reachable_time': 38013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268358, 'error': None, 'target': 'ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.577 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4e343f-1c28-4ce1-97ff-a79f548b4572]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:17d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 355912, 'tstamp': 355912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268359, 'error': None, 'target': 'ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.591 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[6bddc921-4275-4ffc-bfd6-074c880fc831]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd37924c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:17:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 355912, 'reachable_time': 38013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268360, 'error': None, 'target': 'ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.613 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[a870dbfc-12bc-40fb-bdac-79d5b065644e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.644 253516 DEBUG nova.compute.manager [req-ae309da8-a738-4a0e-9517-7456d6194fb9 req-bb8febc8-d3c6-427b-900d-fac63d1e2aa8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received event network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.644 253516 DEBUG oslo_concurrency.lockutils [req-ae309da8-a738-4a0e-9517-7456d6194fb9 req-bb8febc8-d3c6-427b-900d-fac63d1e2aa8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.644 253516 DEBUG oslo_concurrency.lockutils [req-ae309da8-a738-4a0e-9517-7456d6194fb9 req-bb8febc8-d3c6-427b-900d-fac63d1e2aa8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.645 253516 DEBUG oslo_concurrency.lockutils [req-ae309da8-a738-4a0e-9517-7456d6194fb9 req-bb8febc8-d3c6-427b-900d-fac63d1e2aa8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.645 253516 DEBUG nova.compute.manager [req-ae309da8-a738-4a0e-9517-7456d6194fb9 req-bb8febc8-d3c6-427b-900d-fac63d1e2aa8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Processing event network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.653 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdff15d-5782-4872-871c-69c9033db2aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.654 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd37924c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.654 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.654 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd37924c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.656 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:38 compute-0 NetworkManager[48903]: <info>  [1764064778.6567] manager: (tapcd37924c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 25 09:59:38 compute-0 kernel: tapcd37924c-e0: entered promiscuous mode
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.658 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.660 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd37924c-e0, col_values=(('external_ids', {'iface-id': '778605f2-4998-45d7-94c6-d9516e3ec2db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 09:59:38 compute-0 ovn_controller[155020]: 2025-11-25T09:59:38Z|00100|binding|INFO|Releasing lport 778605f2-4998-45d7-94c6-d9516e3ec2db from this chassis (sb_readonly=0)
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.661 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.677 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.678 164791 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd37924c-e3f9-4681-8036-2dfe5148e873.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd37924c-e3f9-4681-8036-2dfe5148e873.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.678 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[93952985-0b33-408b-8380-c610d5d1cd46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.679 164791 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: global
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     log         /dev/log local0 debug
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     log-tag     haproxy-metadata-proxy-cd37924c-e3f9-4681-8036-2dfe5148e873
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     user        root
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     group       root
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     maxconn     1024
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     pidfile     /var/lib/neutron/external/pids/cd37924c-e3f9-4681-8036-2dfe5148e873.pid.haproxy
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     daemon
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: defaults
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     log global
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     mode http
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     option httplog
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     option dontlognull
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     option http-server-close
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     option forwardfor
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     retries                 3
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     timeout http-request    30s
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     timeout connect         30s
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     timeout client          32s
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     timeout server          32s
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     timeout http-keep-alive 30s
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: listen listener
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     bind 169.254.169.254:80
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:     http-request add-header X-OVN-Network-ID cd37924c-e3f9-4681-8036-2dfe5148e873
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 09:59:38 compute-0 ovn_metadata_agent[164786]: 2025-11-25 09:59:38.680 164791 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873', 'env', 'PROCESS_TAG=haproxy-cd37924c-e3f9-4681-8036-2dfe5148e873', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd37924c-e3f9-4681-8036-2dfe5148e873.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 09:59:38 compute-0 ceph-mon[74207]: pgmap v852: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.763 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064778.7631269, f45667c6-e5ff-4c8f-9703-e024233fe578 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.763 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] VM Started (Lifecycle Event)
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.765 253516 DEBUG nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.767 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.770 253516 INFO nova.virt.libvirt.driver [-] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Instance spawned successfully.
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.770 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.781 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.784 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.787 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.787 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.788 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.788 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.788 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.789 253516 DEBUG nova.virt.libvirt.driver [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.804 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.810 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064778.763272, f45667c6-e5ff-4c8f-9703-e024233fe578 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.810 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] VM Paused (Lifecycle Event)
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.835 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.837 253516 DEBUG nova.virt.driver [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] Emitting event <LifecycleEvent: 1764064778.767746, f45667c6-e5ff-4c8f-9703-e024233fe578 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.837 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] VM Resumed (Lifecycle Event)
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.844 253516 INFO nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Took 5.59 seconds to spawn the instance on the hypervisor.
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.844 253516 DEBUG nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.861 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.863 253516 DEBUG nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.891 253516 INFO nova.compute.manager [None req-1c9de003-72fa-4ad9-9dd0-d3d793bdb97b - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.900 253516 INFO nova.compute.manager [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Took 6.25 seconds to build instance.
Nov 25 09:59:38 compute-0 nova_compute[253512]: 2025-11-25 09:59:38.919 253516 DEBUG oslo_concurrency.lockutils [None req-ebaa040d-ea2a-478f-8d28-d15368338718 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:38 compute-0 podman[268426]: 2025-11-25 09:59:38.986440147 +0000 UTC m=+0.050085763 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 09:59:38 compute-0 podman[268437]: 2025-11-25 09:59:38.992139191 +0000 UTC m=+0.038788404 container create 75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 09:59:39 compute-0 systemd[1]: Started libpod-conmon-75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085.scope.
Nov 25 09:59:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 09:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af10a8041ac47ae5bced874206dff3c28954a82080ea94fbbc54ed2297021f4f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 09:59:39 compute-0 podman[268437]: 2025-11-25 09:59:39.053517785 +0000 UTC m=+0.100167017 container init 75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 09:59:39 compute-0 podman[268437]: 2025-11-25 09:59:39.058081691 +0000 UTC m=+0.104730902 container start 75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:59:39 compute-0 podman[268437]: 2025-11-25 09:59:38.974109178 +0000 UTC m=+0.020758410 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 09:59:39 compute-0 neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873[268459]: [NOTICE]   (268463) : New worker (268465) forked
Nov 25 09:59:39 compute-0 neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873[268459]: [NOTICE]   (268463) : Loading success.
Nov 25 09:59:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:59:39 compute-0 nova_compute[253512]: 2025-11-25 09:59:39.760 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:40.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 25 09:59:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 25 09:59:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:40.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:40 compute-0 ceph-mon[74207]: pgmap v853: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 09:59:40 compute-0 nova_compute[253512]: 2025-11-25 09:59:40.720 253516 DEBUG nova.compute.manager [req-0f311873-53ae-4d18-97b3-ce380e5fd12c req-a670a3da-92d3-430f-88db-863c627d3853 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received event network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:59:40 compute-0 nova_compute[253512]: 2025-11-25 09:59:40.720 253516 DEBUG oslo_concurrency.lockutils [req-0f311873-53ae-4d18-97b3-ce380e5fd12c req-a670a3da-92d3-430f-88db-863c627d3853 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 09:59:40 compute-0 nova_compute[253512]: 2025-11-25 09:59:40.720 253516 DEBUG oslo_concurrency.lockutils [req-0f311873-53ae-4d18-97b3-ce380e5fd12c req-a670a3da-92d3-430f-88db-863c627d3853 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 09:59:40 compute-0 nova_compute[253512]: 2025-11-25 09:59:40.720 253516 DEBUG oslo_concurrency.lockutils [req-0f311873-53ae-4d18-97b3-ce380e5fd12c req-a670a3da-92d3-430f-88db-863c627d3853 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 09:59:40 compute-0 nova_compute[253512]: 2025-11-25 09:59:40.721 253516 DEBUG nova.compute.manager [req-0f311873-53ae-4d18-97b3-ce380e5fd12c req-a670a3da-92d3-430f-88db-863c627d3853 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] No waiting events found dispatching network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 09:59:40 compute-0 nova_compute[253512]: 2025-11-25 09:59:40.721 253516 WARNING nova.compute.manager [req-0f311873-53ae-4d18-97b3-ce380e5fd12c req-a670a3da-92d3-430f-88db-863c627d3853 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received unexpected event network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151 for instance with vm_state active and task_state None.
Nov 25 09:59:41 compute-0 ovn_controller[155020]: 2025-11-25T09:59:41Z|00101|binding|INFO|Releasing lport 778605f2-4998-45d7-94c6-d9516e3ec2db from this chassis (sb_readonly=0)
Nov 25 09:59:41 compute-0 NetworkManager[48903]: <info>  [1764064781.4699] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 25 09:59:41 compute-0 NetworkManager[48903]: <info>  [1764064781.4706] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 25 09:59:41 compute-0 nova_compute[253512]: 2025-11-25 09:59:41.481 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:41 compute-0 ovn_controller[155020]: 2025-11-25T09:59:41Z|00102|binding|INFO|Releasing lport 778605f2-4998-45d7-94c6-d9516e3ec2db from this chassis (sb_readonly=0)
Nov 25 09:59:41 compute-0 nova_compute[253512]: 2025-11-25 09:59:41.514 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:41 compute-0 nova_compute[253512]: 2025-11-25 09:59:41.518 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:41 compute-0 nova_compute[253512]: 2025-11-25 09:59:41.697 253516 DEBUG nova.compute.manager [req-d4dd9fdf-f4c9-4018-8a35-2c148bbff58b req-fc3120aa-e29b-4ecc-9c0d-a54189137ffa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received event network-changed-80bccd30-71a6-4a5c-b968-217f0b369151 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 09:59:41 compute-0 nova_compute[253512]: 2025-11-25 09:59:41.697 253516 DEBUG nova.compute.manager [req-d4dd9fdf-f4c9-4018-8a35-2c148bbff58b req-fc3120aa-e29b-4ecc-9c0d-a54189137ffa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Refreshing instance network info cache due to event network-changed-80bccd30-71a6-4a5c-b968-217f0b369151. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 09:59:41 compute-0 nova_compute[253512]: 2025-11-25 09:59:41.697 253516 DEBUG oslo_concurrency.lockutils [req-d4dd9fdf-f4c9-4018-8a35-2c148bbff58b req-fc3120aa-e29b-4ecc-9c0d-a54189137ffa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 09:59:41 compute-0 nova_compute[253512]: 2025-11-25 09:59:41.698 253516 DEBUG oslo_concurrency.lockutils [req-d4dd9fdf-f4c9-4018-8a35-2c148bbff58b req-fc3120aa-e29b-4ecc-9c0d-a54189137ffa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 09:59:41 compute-0 nova_compute[253512]: 2025-11-25 09:59:41.698 253516 DEBUG nova.network.neutron [req-d4dd9fdf-f4c9-4018-8a35-2c148bbff58b req-fc3120aa-e29b-4ecc-9c0d-a54189137ffa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Refreshing network info cache for port 80bccd30-71a6-4a5c-b968-217f0b369151 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 09:59:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:42.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:42 compute-0 nova_compute[253512]: 2025-11-25 09:59:42.445 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:42.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:42 compute-0 ceph-mon[74207]: pgmap v854: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:42 compute-0 sudo[268475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 09:59:42 compute-0 sudo[268475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 09:59:42 compute-0 sudo[268475]: pam_unix(sudo:session): session closed for user root
Nov 25 09:59:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:43 compute-0 nova_compute[253512]: 2025-11-25 09:59:43.580 253516 DEBUG nova.network.neutron [req-d4dd9fdf-f4c9-4018-8a35-2c148bbff58b req-fc3120aa-e29b-4ecc-9c0d-a54189137ffa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Updated VIF entry in instance network info cache for port 80bccd30-71a6-4a5c-b968-217f0b369151. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 09:59:43 compute-0 nova_compute[253512]: 2025-11-25 09:59:43.581 253516 DEBUG nova.network.neutron [req-d4dd9fdf-f4c9-4018-8a35-2c148bbff58b req-fc3120aa-e29b-4ecc-9c0d-a54189137ffa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Updating instance_info_cache with network_info: [{"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 09:59:43 compute-0 nova_compute[253512]: 2025-11-25 09:59:43.596 253516 DEBUG oslo_concurrency.lockutils [req-d4dd9fdf-f4c9-4018-8a35-2c148bbff58b req-fc3120aa-e29b-4ecc-9c0d-a54189137ffa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 09:59:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:44.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:44.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:44 compute-0 ceph-mon[74207]: pgmap v855: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:44 compute-0 nova_compute[253512]: 2025-11-25 09:59:44.762 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_09:59:44
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.rgw.root', 'volumes', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control']
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 09:59:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:59:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 09:59:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 09:59:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 09:59:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:46.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:46.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:46 compute-0 ceph-mon[74207]: pgmap v856: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:47.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:47.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:47.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:47.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:47 compute-0 nova_compute[253512]: 2025-11-25 09:59:47.445 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:48.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:48.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:48 compute-0 ceph-mon[74207]: pgmap v857: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 09:59:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 09:59:49 compute-0 nova_compute[253512]: 2025-11-25 09:59:49.763 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:50 compute-0 ovn_controller[155020]: 2025-11-25T09:59:50Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:68:1b 10.100.0.12
Nov 25 09:59:50 compute-0 ovn_controller[155020]: 2025-11-25T09:59:50Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:68:1b 10.100.0.12
Nov 25 09:59:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:50.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:50] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 25 09:59:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:09:59:50] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 25 09:59:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:50.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:50 compute-0 ceph-mon[74207]: pgmap v858: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 09:59:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 25 09:59:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:52.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:52 compute-0 nova_compute[253512]: 2025-11-25 09:59:52.446 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 09:59:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:52.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 09:59:52 compute-0 ceph-mon[74207]: pgmap v859: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 25 09:59:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 09:59:53 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1439725811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:59:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 09:59:53 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1439725811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:59:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:54.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:54.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:54 compute-0 ceph-mon[74207]: pgmap v860: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1439725811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 09:59:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1439725811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 09:59:54 compute-0 nova_compute[253512]: 2025-11-25 09:59:54.764 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 09:59:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:56 compute-0 nova_compute[253512]: 2025-11-25 09:59:56.076 253516 INFO nova.compute.manager [None req-a0b7017c-fcb7-4691-a8c8-1b6081d02ad1 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Get console output
Nov 25 09:59:56 compute-0 nova_compute[253512]: 2025-11-25 09:59:56.080 259829 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 09:59:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:56.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:56.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:56 compute-0 ceph-mon[74207]: pgmap v861: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:56 compute-0 podman[268515]: 2025-11-25 09:59:56.97599896 +0000 UTC m=+0.039131900 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 09:59:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:57.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:57.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:57.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T09:59:57.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 09:59:57 compute-0 nova_compute[253512]: 2025-11-25 09:59:57.448 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:57 compute-0 ovn_controller[155020]: 2025-11-25T09:59:57Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:68:1b 10.100.0.12
Nov 25 09:59:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 09:59:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 09:59:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 09:59:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:58.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 09:59:58 compute-0 ceph-mon[74207]: pgmap v862: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 09:59:59 compute-0 nova_compute[253512]: 2025-11-25 09:59:59.766 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 09:59:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 09:59:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:00 compute-0 ceph-mon[74207]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 25 10:00:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:00] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 25 10:00:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:00] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 25 10:00:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:00:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:00.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:00:00 compute-0 ovn_controller[155020]: 2025-11-25T10:00:00Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:68:1b 10.100.0.12
Nov 25 10:00:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:00.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:00 compute-0 ceph-mon[74207]: pgmap v863: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 10:00:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:00 compute-0 ceph-mon[74207]: overall HEALTH_OK
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.801 253516 DEBUG nova.compute.manager [req-c54e4bf7-c0af-456b-856d-783a018c273c req-dbf8454d-657b-4732-9d15-3d14fb802b14 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received event network-changed-80bccd30-71a6-4a5c-b968-217f0b369151 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.801 253516 DEBUG nova.compute.manager [req-c54e4bf7-c0af-456b-856d-783a018c273c req-dbf8454d-657b-4732-9d15-3d14fb802b14 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Refreshing instance network info cache due to event network-changed-80bccd30-71a6-4a5c-b968-217f0b369151. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.802 253516 DEBUG oslo_concurrency.lockutils [req-c54e4bf7-c0af-456b-856d-783a018c273c req-dbf8454d-657b-4732-9d15-3d14fb802b14 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.802 253516 DEBUG oslo_concurrency.lockutils [req-c54e4bf7-c0af-456b-856d-783a018c273c req-dbf8454d-657b-4732-9d15-3d14fb802b14 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.802 253516 DEBUG nova.network.neutron [req-c54e4bf7-c0af-456b-856d-783a018c273c req-dbf8454d-657b-4732-9d15-3d14fb802b14 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Refreshing network info cache for port 80bccd30-71a6-4a5c-b968-217f0b369151 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.854 253516 DEBUG oslo_concurrency.lockutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "f45667c6-e5ff-4c8f-9703-e024233fe578" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.855 253516 DEBUG oslo_concurrency.lockutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.855 253516 DEBUG oslo_concurrency.lockutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.855 253516 DEBUG oslo_concurrency.lockutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.855 253516 DEBUG oslo_concurrency.lockutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.856 253516 INFO nova.compute.manager [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Terminating instance
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.857 253516 DEBUG nova.compute.manager [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 10:00:00 compute-0 kernel: tap80bccd30-71 (unregistering): left promiscuous mode
Nov 25 10:00:00 compute-0 NetworkManager[48903]: <info>  [1764064800.8918] device (tap80bccd30-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.892 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.901 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:00 compute-0 ovn_controller[155020]: 2025-11-25T10:00:00Z|00103|binding|INFO|Releasing lport 80bccd30-71a6-4a5c-b968-217f0b369151 from this chassis (sb_readonly=0)
Nov 25 10:00:00 compute-0 ovn_controller[155020]: 2025-11-25T10:00:00Z|00104|binding|INFO|Setting lport 80bccd30-71a6-4a5c-b968-217f0b369151 down in Southbound
Nov 25 10:00:00 compute-0 ovn_controller[155020]: 2025-11-25T10:00:00Z|00105|binding|INFO|Removing iface tap80bccd30-71 ovn-installed in OVS
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.903 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:00 compute-0 nova_compute[253512]: 2025-11-25 10:00:00.918 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:00 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:00.919 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:68:1b 10.100.0.12'], port_security=['fa:16:3e:2c:68:1b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f45667c6-e5ff-4c8f-9703-e024233fe578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd37924c-e3f9-4681-8036-2dfe5148e873', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'db961b1f-db28-49d9-9e53-a56c89630a87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68596f83-283d-427b-947e-397f8c4aea87, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>], logical_port=80bccd30-71a6-4a5c-b968-217f0b369151) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbd73b50970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:00:00 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:00.920 164791 INFO neutron.agent.ovn.metadata.agent [-] Port 80bccd30-71a6-4a5c-b968-217f0b369151 in datapath cd37924c-e3f9-4681-8036-2dfe5148e873 unbound from our chassis
Nov 25 10:00:00 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:00.921 164791 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd37924c-e3f9-4681-8036-2dfe5148e873, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 10:00:00 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:00.922 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[04599002-5868-4f6f-91a4-6123e3fe7187]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:00 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:00.922 164791 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873 namespace which is not needed anymore
Nov 25 10:00:00 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 25 10:00:00 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Consumed 11.598s CPU time.
Nov 25 10:00:00 compute-0 systemd-machined[216497]: Machine qemu-5-instance-0000000a terminated.
Nov 25 10:00:01 compute-0 neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873[268459]: [NOTICE]   (268463) : haproxy version is 2.8.14-c23fe91
Nov 25 10:00:01 compute-0 neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873[268459]: [NOTICE]   (268463) : path to executable is /usr/sbin/haproxy
Nov 25 10:00:01 compute-0 neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873[268459]: [WARNING]  (268463) : Exiting Master process...
Nov 25 10:00:01 compute-0 neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873[268459]: [ALERT]    (268463) : Current worker (268465) exited with code 143 (Terminated)
Nov 25 10:00:01 compute-0 neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873[268459]: [WARNING]  (268463) : All workers exited. Exiting... (0)
Nov 25 10:00:01 compute-0 systemd[1]: libpod-75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085.scope: Deactivated successfully.
Nov 25 10:00:01 compute-0 podman[268556]: 2025-11-25 10:00:01.038778492 +0000 UTC m=+0.036232894 container died 75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 10:00:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085-userdata-shm.mount: Deactivated successfully.
Nov 25 10:00:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-af10a8041ac47ae5bced874206dff3c28954a82080ea94fbbc54ed2297021f4f-merged.mount: Deactivated successfully.
Nov 25 10:00:01 compute-0 podman[268556]: 2025-11-25 10:00:01.061168617 +0000 UTC m=+0.058623019 container cleanup 75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.088 253516 INFO nova.virt.libvirt.driver [-] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Instance destroyed successfully.
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.089 253516 DEBUG nova.objects.instance [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid f45667c6-e5ff-4c8f-9703-e024233fe578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:00:01 compute-0 systemd[1]: libpod-conmon-75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085.scope: Deactivated successfully.
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.101 253516 DEBUG nova.virt.libvirt.vif [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:59:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1106207513',display_name='tempest-TestNetworkBasicOps-server-1106207513',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1106207513',id=10,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJboZEuszExrBq+nvCFMnpSf06Zid/CG09pXMThTrdsa5Tt+2Fa8SDcnFZ2+xfjH7RGEhXpXdX6ZEqbAMEuV4klziHi4NX1pVM6aAsT+NQdfrIjm1jzpO10iIK76Qkij0g==',key_name='tempest-TestNetworkBasicOps-833329281',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:59:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-xahlyy1s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:59:38Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=f45667c6-e5ff-4c8f-9703-e024233fe578,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.102 253516 DEBUG nova.network.os_vif_util [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.103 253516 DEBUG nova.network.os_vif_util [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:68:1b,bridge_name='br-int',has_traffic_filtering=True,id=80bccd30-71a6-4a5c-b968-217f0b369151,network=Network(cd37924c-e3f9-4681-8036-2dfe5148e873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80bccd30-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.103 253516 DEBUG os_vif [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:68:1b,bridge_name='br-int',has_traffic_filtering=True,id=80bccd30-71a6-4a5c-b968-217f0b369151,network=Network(cd37924c-e3f9-4681-8036-2dfe5148e873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80bccd30-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.104 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.104 253516 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80bccd30-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.108 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.110 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.112 253516 INFO os_vif [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:68:1b,bridge_name='br-int',has_traffic_filtering=True,id=80bccd30-71a6-4a5c-b968-217f0b369151,network=Network(cd37924c-e3f9-4681-8036-2dfe5148e873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80bccd30-71')
Nov 25 10:00:01 compute-0 podman[268582]: 2025-11-25 10:00:01.117275391 +0000 UTC m=+0.029113611 container remove 75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 10:00:01 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:01.122 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[d762762d-a2c7-4dea-bd21-9946e9535c3e]: (4, ('Tue Nov 25 10:00:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873 (75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085)\n75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085\nTue Nov 25 10:00:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873 (75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085)\n75b1851ae109ea5800dc4cb2752139f1d6492cd9218f626ce13c3f72db7ff085\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:01 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:01.123 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[a9bc4848-a072-4281-a5e5-cc63743c42e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:01 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:01.124 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd37924c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:01 compute-0 kernel: tapcd37924c-e0: left promiscuous mode
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.129 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.142 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:01 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:01.144 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[7032d3f2-9401-4642-9a26-43d87a891876]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:01 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:01.153 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[b69385df-eea8-482d-8d02-8fed6d19316c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:01 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:01.153 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[9907304f-870e-405f-8b55-59087c116082]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:01 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:01.165 258952 DEBUG oslo.privsep.daemon [-] privsep: reply[89ee172b-c8ed-48e6-99d8-9254860e9f8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 355907, 'reachable_time': 29109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268621, 'error': None, 'target': 'ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:01 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:01.167 164901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd37924c-e3f9-4681-8036-2dfe5148e873 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 10:00:01 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:01.167 164901 DEBUG oslo.privsep.daemon [-] privsep: reply[649e3826-755d-4c3a-8f86-5ab312a33bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:00:01 compute-0 systemd[1]: run-netns-ovnmeta\x2dcd37924c\x2de3f9\x2d4681\x2d8036\x2d2dfe5148e873.mount: Deactivated successfully.
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.278 253516 INFO nova.virt.libvirt.driver [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Deleting instance files /var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578_del
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.279 253516 INFO nova.virt.libvirt.driver [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Deletion of /var/lib/nova/instances/f45667c6-e5ff-4c8f-9703-e024233fe578_del complete
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.313 253516 INFO nova.compute.manager [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Took 0.46 seconds to destroy the instance on the hypervisor.
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.313 253516 DEBUG oslo.service.loopingcall [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.313 253516 DEBUG nova.compute.manager [-] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.313 253516 DEBUG nova.network.neutron [-] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 10:00:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.813 253516 DEBUG nova.network.neutron [-] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.824 253516 INFO nova.compute.manager [-] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Took 0.51 seconds to deallocate network for instance.
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.857 253516 DEBUG oslo_concurrency.lockutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.857 253516 DEBUG oslo_concurrency.lockutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.876 253516 DEBUG nova.compute.manager [req-a06a64aa-e2ee-4390-80bf-dd272641469a req-338480cb-c98d-4e6d-acee-56b7aca9b552 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received event network-vif-deleted-80bccd30-71a6-4a5c-b968-217f0b369151 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:00:01 compute-0 nova_compute[253512]: 2025-11-25 10:00:01.897 253516 DEBUG oslo_concurrency.processutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:00:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758883569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.222 253516 DEBUG oslo_concurrency.processutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.227 253516 DEBUG nova.compute.provider_tree [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.236 253516 DEBUG nova.scheduler.client.report [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:00:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:02.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.268 253516 DEBUG oslo_concurrency.lockutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.291 253516 INFO nova.scheduler.client.report [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance f45667c6-e5ff-4c8f-9703-e024233fe578
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.323 253516 DEBUG nova.network.neutron [req-c54e4bf7-c0af-456b-856d-783a018c273c req-dbf8454d-657b-4732-9d15-3d14fb802b14 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Updated VIF entry in instance network info cache for port 80bccd30-71a6-4a5c-b968-217f0b369151. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.324 253516 DEBUG nova.network.neutron [req-c54e4bf7-c0af-456b-856d-783a018c273c req-dbf8454d-657b-4732-9d15-3d14fb802b14 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Updating instance_info_cache with network_info: [{"id": "80bccd30-71a6-4a5c-b968-217f0b369151", "address": "fa:16:3e:2c:68:1b", "network": {"id": "cd37924c-e3f9-4681-8036-2dfe5148e873", "bridge": "br-int", "label": "tempest-network-smoke--375808395", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80bccd30-71", "ovs_interfaceid": "80bccd30-71a6-4a5c-b968-217f0b369151", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.334 253516 DEBUG oslo_concurrency.lockutils [None req-683999f6-e84f-421b-bd1f-49472f9309f8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.335 253516 DEBUG oslo_concurrency.lockutils [req-c54e4bf7-c0af-456b-856d-783a018c273c req-dbf8454d-657b-4732-9d15-3d14fb802b14 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-f45667c6-e5ff-4c8f-9703-e024233fe578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:00:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:02.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:02 compute-0 sudo[268647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:00:02 compute-0 sudo[268647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:02 compute-0 ceph-mon[74207]: pgmap v864: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 10:00:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2758883569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:02 compute-0 sudo[268647]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.908 253516 DEBUG nova.compute.manager [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received event network-vif-unplugged-80bccd30-71a6-4a5c-b968-217f0b369151 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.909 253516 DEBUG oslo_concurrency.lockutils [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.909 253516 DEBUG oslo_concurrency.lockutils [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.909 253516 DEBUG oslo_concurrency.lockutils [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.909 253516 DEBUG nova.compute.manager [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] No waiting events found dispatching network-vif-unplugged-80bccd30-71a6-4a5c-b968-217f0b369151 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.909 253516 WARNING nova.compute.manager [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received unexpected event network-vif-unplugged-80bccd30-71a6-4a5c-b968-217f0b369151 for instance with vm_state deleted and task_state None.
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.909 253516 DEBUG nova.compute.manager [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received event network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.910 253516 DEBUG oslo_concurrency.lockutils [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.910 253516 DEBUG oslo_concurrency.lockutils [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.910 253516 DEBUG oslo_concurrency.lockutils [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "f45667c6-e5ff-4c8f-9703-e024233fe578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.910 253516 DEBUG nova.compute.manager [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] No waiting events found dispatching network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:00:02 compute-0 nova_compute[253512]: 2025-11-25 10:00:02.910 253516 WARNING nova.compute.manager [req-01b31419-c989-4c68-8bd7-21978aceb597 req-c72f6938-ee7d-4731-9f5c-ddb88590ebe7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Received unexpected event network-vif-plugged-80bccd30-71a6-4a5c-b968-217f0b369151 for instance with vm_state deleted and task_state None.
Nov 25 10:00:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 18 KiB/s wr, 2 op/s
Nov 25 10:00:03 compute-0 podman[268674]: 2025-11-25 10:00:03.989348079 +0000 UTC m=+0.052531725 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 10:00:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:04.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:04 compute-0 nova_compute[253512]: 2025-11-25 10:00:04.769 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:04 compute-0 ceph-mon[74207]: pgmap v865: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 18 KiB/s wr, 2 op/s
Nov 25 10:00:04 compute-0 nova_compute[253512]: 2025-11-25 10:00:04.948 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:05 compute-0 nova_compute[253512]: 2025-11-25 10:00:05.046 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:05.388 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:05.388 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:05.388 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 18 KiB/s wr, 2 op/s
Nov 25 10:00:06 compute-0 nova_compute[253512]: 2025-11-25 10:00:06.108 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:06.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:06.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:06 compute-0 ceph-mon[74207]: pgmap v866: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 18 KiB/s wr, 2 op/s
Nov 25 10:00:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:07.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:07.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:07.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:07.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 19 KiB/s wr, 29 op/s
Nov 25 10:00:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:08.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:08.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:08 compute-0 ceph-mon[74207]: pgmap v867: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 19 KiB/s wr, 29 op/s
Nov 25 10:00:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 25 10:00:09 compute-0 nova_compute[253512]: 2025-11-25 10:00:09.770 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:09 compute-0 podman[268706]: 2025-11-25 10:00:09.975599083 +0000 UTC m=+0.039665175 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:00:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:10] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 25 10:00:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:10] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 25 10:00:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:10.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:00:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:10.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:00:10 compute-0 ceph-mon[74207]: pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 25 10:00:11 compute-0 nova_compute[253512]: 2025-11-25 10:00:11.110 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 25 10:00:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:12.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:12.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:12 compute-0 ceph-mon[74207]: pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 25 10:00:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 10:00:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:14.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:14.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:14 compute-0 nova_compute[253512]: 2025-11-25 10:00:14.771 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:14 compute-0 ceph-mon[74207]: pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 10:00:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:00:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:00:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:00:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:00:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:00:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:00:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:00:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 10:00:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:16 compute-0 nova_compute[253512]: 2025-11-25 10:00:16.084 253516 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064801.082858, f45667c6-e5ff-4c8f-9703-e024233fe578 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:00:16 compute-0 nova_compute[253512]: 2025-11-25 10:00:16.084 253516 INFO nova.compute.manager [-] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] VM Stopped (Lifecycle Event)
Nov 25 10:00:16 compute-0 nova_compute[253512]: 2025-11-25 10:00:16.099 253516 DEBUG nova.compute.manager [None req-698c68fe-0890-494c-aea3-abd8fc0e9347 - - - - - -] [instance: f45667c6-e5ff-4c8f-9703-e024233fe578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:00:16 compute-0 nova_compute[253512]: 2025-11-25 10:00:16.112 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:16.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:16.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:16 compute-0 ceph-mon[74207]: pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 10:00:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:17.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:17.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:17.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:17.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 10:00:17 compute-0 sudo[268729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:00:17 compute-0 sudo[268729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:17 compute-0 sudo[268729]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:17 compute-0 sudo[268755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 10:00:17 compute-0 sudo[268755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:18 compute-0 podman[268838]: 2025-11-25 10:00:18.152844602 +0000 UTC m=+0.040337425 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 10:00:18 compute-0 podman[268838]: 2025-11-25 10:00:18.232204469 +0000 UTC m=+0.119697280 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:00:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:18.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:18 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:18.462 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:00:18 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:18.462 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:00:18 compute-0 nova_compute[253512]: 2025-11-25 10:00:18.464 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:18.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:18 compute-0 podman[268948]: 2025-11-25 10:00:18.549989444 +0000 UTC m=+0.031819414 container exec e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:00:18 compute-0 podman[268948]: 2025-11-25 10:00:18.557055416 +0000 UTC m=+0.038885386 container exec_died e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:00:18 compute-0 podman[269017]: 2025-11-25 10:00:18.729693752 +0000 UTC m=+0.032381955 container exec 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:00:18 compute-0 podman[269017]: 2025-11-25 10:00:18.747043754 +0000 UTC m=+0.049731927 container exec_died 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:00:18 compute-0 ceph-mon[74207]: pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 10:00:18 compute-0 podman[269075]: 2025-11-25 10:00:18.875212815 +0000 UTC m=+0.031420060 container exec c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 10:00:18 compute-0 podman[269075]: 2025-11-25 10:00:18.996134734 +0000 UTC m=+0.152341979 container exec_died c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 10:00:19 compute-0 podman[269133]: 2025-11-25 10:00:19.119244507 +0000 UTC m=+0.031619487 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 10:00:19 compute-0 podman[269133]: 2025-11-25 10:00:19.127047598 +0000 UTC m=+0.039422558 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 10:00:19 compute-0 podman[269187]: 2025-11-25 10:00:19.256958515 +0000 UTC m=+0.037393184 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, release=1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 25 10:00:19 compute-0 podman[269187]: 2025-11-25 10:00:19.270073431 +0000 UTC m=+0.050508100 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, distribution-scope=public, release=1793, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, name=keepalived)
Nov 25 10:00:19 compute-0 podman[269240]: 2025-11-25 10:00:19.40388921 +0000 UTC m=+0.032669116 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:00:19 compute-0 ceph-osd[82261]: bluestore.MempoolThread fragmentation_score=0.000274 took=0.000032s
Nov 25 10:00:19 compute-0 podman[269240]: 2025-11-25 10:00:19.430177453 +0000 UTC m=+0.058957360 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:00:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:00:19 compute-0 sudo[268755]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:00:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:00:19 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:19 compute-0 sudo[269327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:00:19 compute-0 sudo[269327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:19 compute-0 sudo[269327]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:19 compute-0 sudo[269353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:00:19 compute-0 sudo[269353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:19 compute-0 nova_compute[253512]: 2025-11-25 10:00:19.772 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:20 compute-0 sudo[269353]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:00:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:00:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:00:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:00:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 580 B/s rd, 0 op/s
Nov 25 10:00:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:00:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:00:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:00:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:00:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:00:20 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:00:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:00:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:00:20 compute-0 sudo[269408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:00:20 compute-0 sudo[269408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:20 compute-0 sudo[269408]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:20] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 25 10:00:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:20] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 25 10:00:20 compute-0 sudo[269433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:00:20 compute-0 sudo[269433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:20.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:20.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:20 compute-0 podman[269490]: 2025-11-25 10:00:20.525366899 +0000 UTC m=+0.027232895 container create 391e60db504d9ca7dd42fbf01c3dbbe43c96ebe87defd1cd6f2f847a14794309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 10:00:20 compute-0 systemd[1]: Started libpod-conmon-391e60db504d9ca7dd42fbf01c3dbbe43c96ebe87defd1cd6f2f847a14794309.scope.
Nov 25 10:00:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:00:20 compute-0 podman[269490]: 2025-11-25 10:00:20.581454327 +0000 UTC m=+0.083320334 container init 391e60db504d9ca7dd42fbf01c3dbbe43c96ebe87defd1cd6f2f847a14794309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:00:20 compute-0 podman[269490]: 2025-11-25 10:00:20.58751805 +0000 UTC m=+0.089384035 container start 391e60db504d9ca7dd42fbf01c3dbbe43c96ebe87defd1cd6f2f847a14794309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:00:20 compute-0 podman[269490]: 2025-11-25 10:00:20.588702071 +0000 UTC m=+0.090568058 container attach 391e60db504d9ca7dd42fbf01c3dbbe43c96ebe87defd1cd6f2f847a14794309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 10:00:20 compute-0 musing_euler[269503]: 167 167
Nov 25 10:00:20 compute-0 systemd[1]: libpod-391e60db504d9ca7dd42fbf01c3dbbe43c96ebe87defd1cd6f2f847a14794309.scope: Deactivated successfully.
Nov 25 10:00:20 compute-0 podman[269490]: 2025-11-25 10:00:20.591160147 +0000 UTC m=+0.093026133 container died 391e60db504d9ca7dd42fbf01c3dbbe43c96ebe87defd1cd6f2f847a14794309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_euler, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 10:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-76b383cac9a8f726567bc55a8df84a2657990ae4113079e50ebe09ea6eccd2bd-merged.mount: Deactivated successfully.
Nov 25 10:00:20 compute-0 podman[269490]: 2025-11-25 10:00:20.608825313 +0000 UTC m=+0.110691299 container remove 391e60db504d9ca7dd42fbf01c3dbbe43c96ebe87defd1cd6f2f847a14794309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:00:20 compute-0 podman[269490]: 2025-11-25 10:00:20.51395919 +0000 UTC m=+0.015825197 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:00:20 compute-0 systemd[1]: libpod-conmon-391e60db504d9ca7dd42fbf01c3dbbe43c96ebe87defd1cd6f2f847a14794309.scope: Deactivated successfully.
Nov 25 10:00:20 compute-0 ceph-mon[74207]: pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:00:20 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:00:20 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:00:20 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:20 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:00:20 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:00:20 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:00:20 compute-0 podman[269525]: 2025-11-25 10:00:20.730359274 +0000 UTC m=+0.032304138 container create 5399bebea2b070816cf84df8589754071404c54c56eee9ceda188c462a02756a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:00:20 compute-0 systemd[1]: Started libpod-conmon-5399bebea2b070816cf84df8589754071404c54c56eee9ceda188c462a02756a.scope.
Nov 25 10:00:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b86870bb0e12580dc59443b51e9bf834719b5b5218d467a006da8b8b4111e0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b86870bb0e12580dc59443b51e9bf834719b5b5218d467a006da8b8b4111e0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b86870bb0e12580dc59443b51e9bf834719b5b5218d467a006da8b8b4111e0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b86870bb0e12580dc59443b51e9bf834719b5b5218d467a006da8b8b4111e0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b86870bb0e12580dc59443b51e9bf834719b5b5218d467a006da8b8b4111e0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:20 compute-0 podman[269525]: 2025-11-25 10:00:20.791681723 +0000 UTC m=+0.093626606 container init 5399bebea2b070816cf84df8589754071404c54c56eee9ceda188c462a02756a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 10:00:20 compute-0 podman[269525]: 2025-11-25 10:00:20.797466019 +0000 UTC m=+0.099410883 container start 5399bebea2b070816cf84df8589754071404c54c56eee9ceda188c462a02756a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:00:20 compute-0 podman[269525]: 2025-11-25 10:00:20.798668256 +0000 UTC m=+0.100613129 container attach 5399bebea2b070816cf84df8589754071404c54c56eee9ceda188c462a02756a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:00:20 compute-0 podman[269525]: 2025-11-25 10:00:20.72000874 +0000 UTC m=+0.021953623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:00:20 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 25 10:00:21 compute-0 amazing_saha[269538]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:00:21 compute-0 amazing_saha[269538]: --> All data devices are unavailable
Nov 25 10:00:21 compute-0 systemd[1]: libpod-5399bebea2b070816cf84df8589754071404c54c56eee9ceda188c462a02756a.scope: Deactivated successfully.
Nov 25 10:00:21 compute-0 podman[269553]: 2025-11-25 10:00:21.078553395 +0000 UTC m=+0.016932536 container died 5399bebea2b070816cf84df8589754071404c54c56eee9ceda188c462a02756a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 10:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b86870bb0e12580dc59443b51e9bf834719b5b5218d467a006da8b8b4111e0f-merged.mount: Deactivated successfully.
Nov 25 10:00:21 compute-0 podman[269553]: 2025-11-25 10:00:21.09940047 +0000 UTC m=+0.037779591 container remove 5399bebea2b070816cf84df8589754071404c54c56eee9ceda188c462a02756a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 10:00:21 compute-0 systemd[1]: libpod-conmon-5399bebea2b070816cf84df8589754071404c54c56eee9ceda188c462a02756a.scope: Deactivated successfully.
Nov 25 10:00:21 compute-0 nova_compute[253512]: 2025-11-25 10:00:21.114 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:21 compute-0 sudo[269433]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:21 compute-0 sudo[269565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:00:21 compute-0 sudo[269565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:21 compute-0 sudo[269565]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:21 compute-0 sudo[269590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:00:21 compute-0 sudo[269590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:00:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2509102504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:21 compute-0 podman[269647]: 2025-11-25 10:00:21.504818472 +0000 UTC m=+0.027050343 container create 35f77184bbf521e29fc92752a29a8bcb63cb439f36932e8f94cac88af143196f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 10:00:21 compute-0 systemd[1]: Started libpod-conmon-35f77184bbf521e29fc92752a29a8bcb63cb439f36932e8f94cac88af143196f.scope.
Nov 25 10:00:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:00:21 compute-0 podman[269647]: 2025-11-25 10:00:21.545391809 +0000 UTC m=+0.067623690 container init 35f77184bbf521e29fc92752a29a8bcb63cb439f36932e8f94cac88af143196f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 10:00:21 compute-0 podman[269647]: 2025-11-25 10:00:21.549913334 +0000 UTC m=+0.072145206 container start 35f77184bbf521e29fc92752a29a8bcb63cb439f36932e8f94cac88af143196f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:00:21 compute-0 podman[269647]: 2025-11-25 10:00:21.551062319 +0000 UTC m=+0.073294200 container attach 35f77184bbf521e29fc92752a29a8bcb63cb439f36932e8f94cac88af143196f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_rhodes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 10:00:21 compute-0 pensive_rhodes[269660]: 167 167
Nov 25 10:00:21 compute-0 systemd[1]: libpod-35f77184bbf521e29fc92752a29a8bcb63cb439f36932e8f94cac88af143196f.scope: Deactivated successfully.
Nov 25 10:00:21 compute-0 podman[269647]: 2025-11-25 10:00:21.55366103 +0000 UTC m=+0.075892900 container died 35f77184bbf521e29fc92752a29a8bcb63cb439f36932e8f94cac88af143196f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_rhodes, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 10:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-82deef430bd55fd7719f30fdfa304b3d44fbbc53482ad11f596835264cdd0fb7-merged.mount: Deactivated successfully.
Nov 25 10:00:21 compute-0 podman[269647]: 2025-11-25 10:00:21.571306309 +0000 UTC m=+0.093538180 container remove 35f77184bbf521e29fc92752a29a8bcb63cb439f36932e8f94cac88af143196f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:00:21 compute-0 podman[269647]: 2025-11-25 10:00:21.49362125 +0000 UTC m=+0.015853141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:00:21 compute-0 systemd[1]: libpod-conmon-35f77184bbf521e29fc92752a29a8bcb63cb439f36932e8f94cac88af143196f.scope: Deactivated successfully.
Nov 25 10:00:21 compute-0 ceph-mon[74207]: pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 580 B/s rd, 0 op/s
Nov 25 10:00:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4117408647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:21 compute-0 ceph-mon[74207]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 25 10:00:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2509102504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:21 compute-0 podman[269682]: 2025-11-25 10:00:21.692497474 +0000 UTC m=+0.029616961 container create 1f4ed80c7f6eb58e0203e7e83204eb9359405a82d92b3435c721877fdaedd82d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jepsen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:00:21 compute-0 systemd[1]: Started libpod-conmon-1f4ed80c7f6eb58e0203e7e83204eb9359405a82d92b3435c721877fdaedd82d.scope.
Nov 25 10:00:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba2c1de46b15c4ce243fc39411a85f8d4594b7f3482d33c8e6ad6c33c6ed316/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba2c1de46b15c4ce243fc39411a85f8d4594b7f3482d33c8e6ad6c33c6ed316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba2c1de46b15c4ce243fc39411a85f8d4594b7f3482d33c8e6ad6c33c6ed316/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba2c1de46b15c4ce243fc39411a85f8d4594b7f3482d33c8e6ad6c33c6ed316/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:21 compute-0 podman[269682]: 2025-11-25 10:00:21.74883336 +0000 UTC m=+0.085952868 container init 1f4ed80c7f6eb58e0203e7e83204eb9359405a82d92b3435c721877fdaedd82d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:00:21 compute-0 podman[269682]: 2025-11-25 10:00:21.755191198 +0000 UTC m=+0.092310685 container start 1f4ed80c7f6eb58e0203e7e83204eb9359405a82d92b3435c721877fdaedd82d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:00:21 compute-0 podman[269682]: 2025-11-25 10:00:21.756202024 +0000 UTC m=+0.093321510 container attach 1f4ed80c7f6eb58e0203e7e83204eb9359405a82d92b3435c721877fdaedd82d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jepsen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:00:21 compute-0 podman[269682]: 2025-11-25 10:00:21.681843668 +0000 UTC m=+0.018963154 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:00:21 compute-0 kind_jepsen[269697]: {
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:     "1": [
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:         {
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "devices": [
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "/dev/loop3"
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             ],
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "lv_name": "ceph_lv0",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "lv_size": "21470642176",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "name": "ceph_lv0",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "tags": {
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.cluster_name": "ceph",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.crush_device_class": "",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.encrypted": "0",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.osd_id": "1",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.type": "block",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.vdo": "0",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:                 "ceph.with_tpm": "0"
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             },
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "type": "block",
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:             "vg_name": "ceph_vg0"
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:         }
Nov 25 10:00:21 compute-0 kind_jepsen[269697]:     ]
Nov 25 10:00:21 compute-0 kind_jepsen[269697]: }
Nov 25 10:00:21 compute-0 systemd[1]: libpod-1f4ed80c7f6eb58e0203e7e83204eb9359405a82d92b3435c721877fdaedd82d.scope: Deactivated successfully.
Nov 25 10:00:21 compute-0 podman[269682]: 2025-11-25 10:00:21.984208 +0000 UTC m=+0.321327487 container died 1f4ed80c7f6eb58e0203e7e83204eb9359405a82d92b3435c721877fdaedd82d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fba2c1de46b15c4ce243fc39411a85f8d4594b7f3482d33c8e6ad6c33c6ed316-merged.mount: Deactivated successfully.
Nov 25 10:00:22 compute-0 podman[269682]: 2025-11-25 10:00:22.006117157 +0000 UTC m=+0.343236644 container remove 1f4ed80c7f6eb58e0203e7e83204eb9359405a82d92b3435c721877fdaedd82d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 25 10:00:22 compute-0 systemd[1]: libpod-conmon-1f4ed80c7f6eb58e0203e7e83204eb9359405a82d92b3435c721877fdaedd82d.scope: Deactivated successfully.
Nov 25 10:00:22 compute-0 sudo[269590]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:22 compute-0 sudo[269717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:00:22 compute-0 sudo[269717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:22 compute-0 sudo[269717]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:22 compute-0 sudo[269742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:00:22 compute-0 sudo[269742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 580 B/s rd, 0 op/s
Nov 25 10:00:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.002000020s ======
Nov 25 10:00:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:22.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Nov 25 10:00:22 compute-0 podman[269797]: 2025-11-25 10:00:22.410075777 +0000 UTC m=+0.028840388 container create 3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_davinci, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:00:22 compute-0 systemd[1]: Started libpod-conmon-3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033.scope.
Nov 25 10:00:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:00:22 compute-0 podman[269797]: 2025-11-25 10:00:22.471087579 +0000 UTC m=+0.089852190 container init 3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_davinci, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 10:00:22 compute-0 podman[269797]: 2025-11-25 10:00:22.476622524 +0000 UTC m=+0.095387136 container start 3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_davinci, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:00:22 compute-0 podman[269797]: 2025-11-25 10:00:22.478034036 +0000 UTC m=+0.096798647 container attach 3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_davinci, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:00:22 compute-0 charming_davinci[269810]: 167 167
Nov 25 10:00:22 compute-0 systemd[1]: libpod-3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033.scope: Deactivated successfully.
Nov 25 10:00:22 compute-0 conmon[269810]: conmon 3dfcc3454113b79df911 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033.scope/container/memory.events
Nov 25 10:00:22 compute-0 podman[269797]: 2025-11-25 10:00:22.481087414 +0000 UTC m=+0.099852024 container died 3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-95eec06cfa93e99c994c81b9d1a80e7f14d18eea75d4c2f07fa9dfc673e94473-merged.mount: Deactivated successfully.
Nov 25 10:00:22 compute-0 podman[269797]: 2025-11-25 10:00:22.398765863 +0000 UTC m=+0.017530474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:00:22 compute-0 podman[269797]: 2025-11-25 10:00:22.499785637 +0000 UTC m=+0.118550248 container remove 3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_davinci, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:00:22 compute-0 systemd[1]: libpod-conmon-3dfcc3454113b79df91114f997f8fc6efd9b28dd8b5afc470e85657b16737033.scope: Deactivated successfully.
Nov 25 10:00:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:22.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:22 compute-0 podman[269833]: 2025-11-25 10:00:22.626695595 +0000 UTC m=+0.030079514 container create 0e344663957761d0c57beaa69d9402ea2a4c2fcf7596358d2b4124919bcbad6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:00:22 compute-0 systemd[1]: Started libpod-conmon-0e344663957761d0c57beaa69d9402ea2a4c2fcf7596358d2b4124919bcbad6b.scope.
Nov 25 10:00:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842d0eec8c6928c459c7ea2588bcce97d8ec71785425d9b1fc3f4e061d80002c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842d0eec8c6928c459c7ea2588bcce97d8ec71785425d9b1fc3f4e061d80002c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842d0eec8c6928c459c7ea2588bcce97d8ec71785425d9b1fc3f4e061d80002c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842d0eec8c6928c459c7ea2588bcce97d8ec71785425d9b1fc3f4e061d80002c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:00:22 compute-0 podman[269833]: 2025-11-25 10:00:22.679446552 +0000 UTC m=+0.082830491 container init 0e344663957761d0c57beaa69d9402ea2a4c2fcf7596358d2b4124919bcbad6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lovelace, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:00:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/533156285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:22 compute-0 podman[269833]: 2025-11-25 10:00:22.684921255 +0000 UTC m=+0.088305174 container start 0e344663957761d0c57beaa69d9402ea2a4c2fcf7596358d2b4124919bcbad6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:00:22 compute-0 podman[269833]: 2025-11-25 10:00:22.687803759 +0000 UTC m=+0.091187688 container attach 0e344663957761d0c57beaa69d9402ea2a4c2fcf7596358d2b4124919bcbad6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:00:22 compute-0 podman[269833]: 2025-11-25 10:00:22.615297644 +0000 UTC m=+0.018681584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:00:22 compute-0 sudo[269851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:00:22 compute-0 sudo[269851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:22 compute-0 sudo[269851]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:23 compute-0 lvm[269948]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:00:23 compute-0 lvm[269948]: VG ceph_vg0 finished
Nov 25 10:00:23 compute-0 optimistic_lovelace[269846]: {}
Nov 25 10:00:23 compute-0 systemd[1]: libpod-0e344663957761d0c57beaa69d9402ea2a4c2fcf7596358d2b4124919bcbad6b.scope: Deactivated successfully.
Nov 25 10:00:23 compute-0 podman[269833]: 2025-11-25 10:00:23.200118554 +0000 UTC m=+0.603502473 container died 0e344663957761d0c57beaa69d9402ea2a4c2fcf7596358d2b4124919bcbad6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 25 10:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-842d0eec8c6928c459c7ea2588bcce97d8ec71785425d9b1fc3f4e061d80002c-merged.mount: Deactivated successfully.
Nov 25 10:00:23 compute-0 podman[269833]: 2025-11-25 10:00:23.223740932 +0000 UTC m=+0.627124851 container remove 0e344663957761d0c57beaa69d9402ea2a4c2fcf7596358d2b4124919bcbad6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lovelace, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:00:23 compute-0 systemd[1]: libpod-conmon-0e344663957761d0c57beaa69d9402ea2a4c2fcf7596358d2b4124919bcbad6b.scope: Deactivated successfully.
Nov 25 10:00:23 compute-0 sudo[269742]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:00:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:00:23 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:23 compute-0 sudo[269958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:00:23 compute-0 sudo[269958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:23 compute-0 sudo[269958]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:23 compute-0 ceph-mon[74207]: pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 580 B/s rd, 0 op/s
Nov 25 10:00:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:23 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 580 B/s rd, 0 op/s
Nov 25 10:00:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:24.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:24 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:00:24.464 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:00:24 compute-0 nova_compute[253512]: 2025-11-25 10:00:24.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:24.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:24 compute-0 nova_compute[253512]: 2025-11-25 10:00:24.773 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:25 compute-0 nova_compute[253512]: 2025-11-25 10:00:25.467 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:25 compute-0 nova_compute[253512]: 2025-11-25 10:00:25.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:25 compute-0 nova_compute[253512]: 2025-11-25 10:00:25.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:25 compute-0 nova_compute[253512]: 2025-11-25 10:00:25.485 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:25 compute-0 nova_compute[253512]: 2025-11-25 10:00:25.486 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:25 compute-0 nova_compute[253512]: 2025-11-25 10:00:25.486 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:25 compute-0 nova_compute[253512]: 2025-11-25 10:00:25.486 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:00:25 compute-0 nova_compute[253512]: 2025-11-25 10:00:25.486 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:25 compute-0 ceph-mon[74207]: pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 580 B/s rd, 0 op/s
Nov 25 10:00:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3339167886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:00:25 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105020736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:25 compute-0 nova_compute[253512]: 2025-11-25 10:00:25.818 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.002 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.003 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4577MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.003 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.003 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.043 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.043 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.055 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.117 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 67 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 902 KiB/s wr, 15 op/s
Nov 25 10:00:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:26.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:00:26 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/455567082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.378 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.381 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.392 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.409 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:00:26 compute-0 nova_compute[253512]: 2025-11-25 10:00:26.409 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:00:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:26.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4105020736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/121840190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/455567082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:27.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:27.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:27.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:27.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:27 compute-0 nova_compute[253512]: 2025-11-25 10:00:27.410 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:27 compute-0 nova_compute[253512]: 2025-11-25 10:00:27.410 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:27 compute-0 ceph-mon[74207]: pgmap v877: 337 pgs: 337 active+clean; 67 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 902 KiB/s wr, 15 op/s
Nov 25 10:00:27 compute-0 podman[270034]: 2025-11-25 10:00:27.975758736 +0000 UTC m=+0.035699691 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:00:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:28 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 25 10:00:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:28.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:28 compute-0 nova_compute[253512]: 2025-11-25 10:00:28.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:28 compute-0 nova_compute[253512]: 2025-11-25 10:00:28.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:00:28 compute-0 nova_compute[253512]: 2025-11-25 10:00:28.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:00:28 compute-0 nova_compute[253512]: 2025-11-25 10:00:28.480 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:00:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:28.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4163467953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:29 compute-0 nova_compute[253512]: 2025-11-25 10:00:29.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:29 compute-0 nova_compute[253512]: 2025-11-25 10:00:29.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:00:29 compute-0 ceph-mon[74207]: pgmap v878: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 25 10:00:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4112314602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:29 compute-0 nova_compute[253512]: 2025-11-25 10:00:29.774 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 25 10:00:29 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:00:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:30 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 25 10:00:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:30] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Nov 25 10:00:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:30] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Nov 25 10:00:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:30.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:30 compute-0 nova_compute[253512]: 2025-11-25 10:00:30.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:30.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:00:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:30 compute-0 ceph-mon[74207]: pgmap v879: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 25 10:00:31 compute-0 nova_compute[253512]: 2025-11-25 10:00:31.119 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:31 compute-0 nova_compute[253512]: 2025-11-25 10:00:31.467 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:00:32 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:00:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:32.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:33 compute-0 ceph-mon[74207]: pgmap v880: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:00:34 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:00:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:34.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:34.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:34 compute-0 nova_compute[253512]: 2025-11-25 10:00:34.776 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:34 compute-0 podman[270056]: 2025-11-25 10:00:34.996409099 +0000 UTC m=+0.054070527 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 10:00:35 compute-0 ceph-mon[74207]: pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:00:36 compute-0 nova_compute[253512]: 2025-11-25 10:00:36.121 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:36 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:00:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:36.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:36.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:37.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:37.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:37.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:37.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:37 compute-0 ceph-mon[74207]: pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:00:37 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:38 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 88 op/s
Nov 25 10:00:38 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/928370122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:00:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:38.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:38.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:39 compute-0 ceph-mon[74207]: pgmap v883: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 88 op/s
Nov 25 10:00:39 compute-0 nova_compute[253512]: 2025-11-25 10:00:39.777 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:40 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:00:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:40] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Nov 25 10:00:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:40] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Nov 25 10:00:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:40.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:40.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:40 compute-0 podman[270085]: 2025-11-25 10:00:40.976395018 +0000 UTC m=+0.040854098 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:00:41 compute-0 nova_compute[253512]: 2025-11-25 10:00:41.122 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:41 compute-0 ceph-mon[74207]: pgmap v884: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:00:41 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2516742035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:41 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2678682281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:00:42 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 25 10:00:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:42.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:42 compute-0 sudo[270105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:00:42 compute-0 sudo[270105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:00:42 compute-0 sudo[270105]: pam_unix(sudo:session): session closed for user root
Nov 25 10:00:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:43 compute-0 ceph-mon[74207]: pgmap v885: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 194 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Nov 25 10:00:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:44.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:44.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:44 compute-0 nova_compute[253512]: 2025-11-25 10:00:44.779 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:00:44
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'volumes', '.nfs', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', '.mgr']
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 10:00:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:00:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:00:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:00:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:00:45 compute-0 ceph-mon[74207]: pgmap v886: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 194 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Nov 25 10:00:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:00:45 compute-0 ovn_controller[155020]: 2025-11-25T10:00:45Z|00106|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Nov 25 10:00:46 compute-0 nova_compute[253512]: 2025-11-25 10:00:46.124 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:46 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Nov 25 10:00:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:46.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:46.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:47.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:47.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:47.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:47.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:47 compute-0 ceph-mon[74207]: pgmap v887: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Nov 25 10:00:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:48 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 25 10:00:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:48.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:48.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:49 compute-0 ceph-mon[74207]: pgmap v888: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 25 10:00:49 compute-0 nova_compute[253512]: 2025-11-25 10:00:49.780 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:50 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 25 10:00:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:50] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Nov 25 10:00:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:00:50] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Nov 25 10:00:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:50.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:50.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:51 compute-0 nova_compute[253512]: 2025-11-25 10:00:51.125 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:51 compute-0 ceph-mon[74207]: pgmap v889: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 25 10:00:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 25 10:00:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:52.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:52.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:53 compute-0 ceph-mon[74207]: pgmap v890: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 25 10:00:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 74 op/s
Nov 25 10:00:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3267203020' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:00:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3267203020' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:00:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:54.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:54.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:54 compute-0 nova_compute[253512]: 2025-11-25 10:00:54.781 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:55 compute-0 ceph-mon[74207]: pgmap v891: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 74 op/s
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011071142231555643 of space, bias 1.0, pg target 0.3321342669466693 quantized to 32 (current 32)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:00:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:00:56 compute-0 nova_compute[253512]: 2025-11-25 10:00:56.128 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 25 10:00:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:56.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:56.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:57.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:57.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:57.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:00:57.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:00:57 compute-0 ceph-mon[74207]: pgmap v892: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 25 10:00:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:00:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 954 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Nov 25 10:00:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:00:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:58.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:00:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:00:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:00:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:00:58 compute-0 podman[270147]: 2025-11-25 10:00:58.970496584 +0000 UTC m=+0.036930440 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 10:00:59 compute-0 ceph-mon[74207]: pgmap v893: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 954 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Nov 25 10:00:59 compute-0 nova_compute[253512]: 2025-11-25 10:00:59.783 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:00:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:00:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 10:01:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:00] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Nov 25 10:01:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:00] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Nov 25 10:01:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:00.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:01 compute-0 nova_compute[253512]: 2025-11-25 10:01:01.129 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:01 compute-0 CROND[270166]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 10:01:01 compute-0 run-parts[270169]: (/etc/cron.hourly) starting 0anacron
Nov 25 10:01:01 compute-0 run-parts[270175]: (/etc/cron.hourly) finished 0anacron
Nov 25 10:01:01 compute-0 CROND[270165]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 10:01:01 compute-0 ceph-mon[74207]: pgmap v894: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 10:01:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 10:01:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:02.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:02.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:02 compute-0 sudo[270178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:01:02 compute-0 sudo[270178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:02 compute-0 sudo[270178]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:03 compute-0 ceph-mon[74207]: pgmap v895: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 10:01:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 10:01:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:04.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:04.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:04 compute-0 nova_compute[253512]: 2025-11-25 10:01:04.783 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:05 compute-0 ceph-mon[74207]: pgmap v896: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 10:01:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:01:05.389 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:01:05.389 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:01:05.389 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:05 compute-0 podman[270207]: 2025-11-25 10:01:05.994416924 +0000 UTC m=+0.054882267 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:01:06 compute-0 nova_compute[253512]: 2025-11-25 10:01:06.130 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 10:01:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:06.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:06.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:07.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:07.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:07.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:07.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
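Annotator's note: the three Alertmanager webhook receivers above all fail the same way, and the "retry canceled after 8 attempts" error shows this is persistent: the `*.shiftstack` dashboard hostnames do not resolve against the DNS server at 192.168.122.80:53. A small sketch to summarize which names are failing and against which resolver:

```python
import re

# Matches the Go resolver error embedded in the Alertmanager messages above.
ERR = re.compile(r'lookup (?P<host>\S+) on (?P<dns>[\d.]+:\d+): no such host')

def unresolved(lines):
    """Return {hostname: dns_server} for every 'no such host' error seen."""
    out = {}
    for line in lines:
        for m in ERR.finditer(line):
            out[m.group("host")] = m.group("dns")
    return out

# e.g. unresolved(open("/var/log/messages")) ->
# {'np0005534694.shiftstack': '192.168.122.80:53',
#  'np0005534695.shiftstack': '192.168.122.80:53',
#  'np0005534696.shiftstack': '192.168.122.80:53'}
```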
Nov 25 10:01:07 compute-0 ceph-mon[74207]: pgmap v897: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 25 10:01:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
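Annotator's note: the `_set_new_cache_sizes` values above are raw byte counts. A quick unit check; note that `kv_alloc` is exactly the 304.00 MB block-cache capacity reported in the RocksDB stats dump further below:

```python
MIB = 1024 ** 2

for name, val in {
    "cache_size": 1020054731,   # ~972.74 MiB
    "inc_alloc": 343932928,     # 328.00 MiB
    "full_alloc": 348127232,    # 332.00 MiB
    "kv_alloc": 318767104,      # 304.00 MiB
}.items():
    print(f"{name}: {val / MIB:,.2f} MiB")
```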
Nov 25 10:01:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 1 op/s
Nov 25 10:01:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:01:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:08.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:01:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:08.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
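Annotator's note: the recurring radosgw `beast:` lines are access-log records for the HAProxy-style `HEAD /` health probes arriving from 192.168.122.100 and .102 every two seconds. A hedged parser for their fields:

```python
import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>[\d.]+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Extract client IP, user, timestamp, request, status, and latency."""
    m = BEAST.search(line)
    return m.groupdict() if m else None

# parse_beast(line_above) ->
# {'ip': '192.168.122.100', 'user': 'anonymous',
#  'ts': '25/Nov/2025:10:01:08.563 +0000', 'req': 'HEAD / HTTP/1.0',
#  'status': '200', 'bytes': '0', 'latency': '0.000000000'}
```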
Nov 25 10:01:09 compute-0 ceph-mon[74207]: pgmap v898: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 1 op/s
Nov 25 10:01:09 compute-0 nova_compute[253512]: 2025-11-25 10:01:09.784 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 1 op/s
Nov 25 10:01:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:10] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Nov 25 10:01:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:10] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Nov 25 10:01:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:01:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:10.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:01:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1682084571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:10.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:11 compute-0 nova_compute[253512]: 2025-11-25 10:01:11.132 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:11 compute-0 ceph-mon[74207]: pgmap v899: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 1 op/s
Nov 25 10:01:11 compute-0 podman[270236]: 2025-11-25 10:01:11.977386891 +0000 UTC m=+0.039062489 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 10:01:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 29 op/s
Nov 25 10:01:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:12.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:12.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:13 compute-0 ceph-mon[74207]: pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 29 op/s
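Annotator's note: both ceph-mon and ceph-mgr emit a pgmap summary every couple of seconds (the mgr line arrives first, the mon echoes it). A sketch that splits one of these lines into its capacity fields:

```python
import re

PGMAP = re.compile(
    r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); '
    r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
    r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail'
)

def parse_pgmap(line):
    m = PGMAP.search(line)
    return m.groupdict() if m else None

# On the v900 line above:
# {'ver': '900', 'pgs': '337', 'states': '337 active+clean',
#  'data': '121 MiB', 'used': '314 MiB',
#  'avail': '60 GiB', 'total': '60 GiB'}
```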
Nov 25 10:01:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 7.8 KiB/s wr, 28 op/s
Nov 25 10:01:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:14.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:14.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:14 compute-0 nova_compute[253512]: 2025-11-25 10:01:14.786 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:01:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:01:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:01:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:01:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:01:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:01:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:01:15 compute-0 ceph-mon[74207]: pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 7.8 KiB/s wr, 28 op/s
Nov 25 10:01:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2852511518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
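Annotator's note: the `cmd=[{"prefix": "df", "format": "json"}]` dispatches above are the OpenStack storage clients on .101/.102 polling pool capacity. A minimal equivalent from the CLI side, assuming a working `ceph` client and a keyring for `client.openstack`:

```python
import json
import subprocess

def ceph_df(entity: str = "client.openstack") -> dict:
    """Run the same capacity query the clients above dispatch to the mon."""
    out = subprocess.run(
        ["ceph", "--name", entity, "df", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

# ceph_df()["stats"] carries cluster-wide totals, "pools" per-pool usage.
```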
Nov 25 10:01:16 compute-0 nova_compute[253512]: 2025-11-25 10:01:16.135 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:16 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 57 op/s
Nov 25 10:01:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:01:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:16.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:01:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:16.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:17.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:17.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:17.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:17.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:17 compute-0 ceph-mon[74207]: pgmap v902: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 57 op/s
Nov 25 10:01:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:01:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5875 writes, 26K keys, 5875 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 5875 writes, 5875 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1580 writes, 6711 keys, 1580 commit groups, 1.0 writes per commit group, ingest: 11.35 MB, 0.02 MB/s
                                           Interval WAL: 1580 writes, 1580 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    394.1      0.10              0.06        14    0.007       0      0       0.0       0.0
                                             L6      1/0   11.52 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.1    508.3    433.8      0.37              0.26        13    0.029     66K   6862       0.0       0.0
                                            Sum      1/0   11.52 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.1    401.0    425.4      0.47              0.32        27    0.017     66K   6862       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.1    428.1    418.6      0.17              0.12        10    0.017     29K   2533       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    508.3    433.8      0.37              0.26        13    0.029     66K   6862       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    400.2      0.10              0.06        13    0.008       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     27.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.18 GB read, 0.10 MB/s read, 0.5 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e6ae573350#2 capacity: 304.00 MB usage: 15.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(814,15.07 MB,4.95782%) FilterBlock(28,196.11 KB,0.0629977%) IndexBlock(28,343.12 KB,0.110225%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
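Annotator's note: the cumulative figures in the RocksDB stats dump above are internally consistent; a quick arithmetic check reproduces the reported throughput from the raw totals:

```python
uptime_s, interval_s = 1800.0, 600.0

# 0.05 GB ingested over 1800 s -> the reported ~0.03 MB/s
print(f"cumulative ingest: {0.05 * 1024 / uptime_s:.2f} MB/s")
# 11.35 MB over the 600 s interval -> the reported ~0.02 MB/s
print(f"interval ingest:   {11.35 / interval_s:.2f} MB/s")
# 5875 writes, 5875 syncs -> WAL is synced once per write
print(f"writes per sync:   {5875 / 5875:.2f}")
```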
Nov 25 10:01:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:18 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Nov 25 10:01:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:18.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:18.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:18.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:18.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:19 compute-0 ceph-mon[74207]: pgmap v903: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Nov 25 10:01:19 compute-0 nova_compute[253512]: 2025-11-25 10:01:19.788 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Nov 25 10:01:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:20] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Nov 25 10:01:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:20] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Nov 25 10:01:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:20.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:21 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:01:21.012 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:01:21 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:01:21.013 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:01:21 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:01:21.013 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
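Annotator's note: the transaction above acknowledges the southbound `nb_cfg` bump (10 -> 11) by writing it back into the agent's Chassis_Private row. A minimal sketch of that DbSetCommand via the ovsdbapp API; `sb_idl` stands in for an established southbound connection and is a placeholder, not taken from the log:

```python
def ack_sb_cfg(sb_idl, chassis_private_uuid: str, nb_cfg: int) -> None:
    """Record the processed nb_cfg in Chassis_Private external_ids,
    mirroring the DbSetCommand shown in the transaction above."""
    sb_idl.db_set(
        "Chassis_Private",
        chassis_private_uuid,
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
    ).execute(check_error=True)

# e.g. ack_sb_cfg(sb_idl, "a23dd616-1012-4f28-8d7d-927fdaae5f69", 11)
```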
Nov 25 10:01:21 compute-0 nova_compute[253512]: 2025-11-25 10:01:21.013 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:21 compute-0 nova_compute[253512]: 2025-11-25 10:01:21.136 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:21 compute-0 ceph-mon[74207]: pgmap v904: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Nov 25 10:01:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Nov 25 10:01:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:22.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:22.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:23 compute-0 sudo[270263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:01:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:23 compute-0 sudo[270263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:23 compute-0 sudo[270263]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:23 compute-0 ceph-mon[74207]: pgmap v905: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 56 op/s
Nov 25 10:01:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/840095353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1525438121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:23 compute-0 sudo[270288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:01:23 compute-0 sudo[270288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:23 compute-0 sudo[270288]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:23 compute-0 sudo[270313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:01:23 compute-0 sudo[270313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:23 compute-0 sudo[270313]: pam_unix(sudo:session): session closed for user root
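Annotator's note: the sudo lines above are cephadm's audit trail, with ceph-admin escalating to root for `gather-facts` and the ceph-volume calls that follow. A hedged extractor for user, working directory, target user, and command:

```python
import re

SUDO = re.compile(
    r'sudo\[\d+\]: (?P<user>\S+) : PWD=(?P<pwd>\S+) ; '
    r'USER=(?P<as_user>\S+) ; COMMAND=(?P<cmd>.+)$'
)

def parse_sudo(line):
    m = SUDO.search(line)
    return m.groupdict() if m else None

# parse_sudo(gather_facts_line) ->
# {'user': 'ceph-admin', 'pwd': '/home/ceph-admin', 'as_user': 'root',
#  'cmd': '/bin/python3 /var/lib/ceph/.../cephadm.... --timeout 895 gather-facts'}
```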
Nov 25 10:01:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:01:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:01:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:01:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:01:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Nov 25 10:01:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:01:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:01:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:01:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:01:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:01:24 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:01:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:01:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:01:24 compute-0 sudo[270369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:01:24 compute-0 sudo[270369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:24 compute-0 sudo[270369]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:24 compute-0 sudo[270394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:01:24 compute-0 sudo[270394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:24.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:24 compute-0 podman[270450]: 2025-11-25 10:01:24.392863779 +0000 UTC m=+0.031415612 container create 5f927b245cb3e7e8331d29862d326684a67b0d9131a82e62d574cfc5c5d6eb43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 10:01:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:01:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:01:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:01:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:01:24 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:01:24 compute-0 systemd[1]: Started libpod-conmon-5f927b245cb3e7e8331d29862d326684a67b0d9131a82e62d574cfc5c5d6eb43.scope.
Nov 25 10:01:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:01:24 compute-0 podman[270450]: 2025-11-25 10:01:24.45067063 +0000 UTC m=+0.089222483 container init 5f927b245cb3e7e8331d29862d326684a67b0d9131a82e62d574cfc5c5d6eb43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lederberg, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:01:24 compute-0 podman[270450]: 2025-11-25 10:01:24.455530121 +0000 UTC m=+0.094081965 container start 5f927b245cb3e7e8331d29862d326684a67b0d9131a82e62d574cfc5c5d6eb43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lederberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:01:24 compute-0 podman[270450]: 2025-11-25 10:01:24.456790117 +0000 UTC m=+0.095341950 container attach 5f927b245cb3e7e8331d29862d326684a67b0d9131a82e62d574cfc5c5d6eb43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:01:24 compute-0 admiring_lederberg[270463]: 167 167
Nov 25 10:01:24 compute-0 systemd[1]: libpod-5f927b245cb3e7e8331d29862d326684a67b0d9131a82e62d574cfc5c5d6eb43.scope: Deactivated successfully.
Nov 25 10:01:24 compute-0 podman[270450]: 2025-11-25 10:01:24.460507806 +0000 UTC m=+0.099059639 container died 5f927b245cb3e7e8331d29862d326684a67b0d9131a82e62d574cfc5c5d6eb43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:01:24 compute-0 nova_compute[253512]: 2025-11-25 10:01:24.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-02ca5471e93d44f3573f5d2d466d928da8c2ff9045bd9149857c4ca46ac15e62-merged.mount: Deactivated successfully.
Nov 25 10:01:24 compute-0 podman[270450]: 2025-11-25 10:01:24.380710375 +0000 UTC m=+0.019262228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:01:24 compute-0 podman[270450]: 2025-11-25 10:01:24.481618398 +0000 UTC m=+0.120170231 container remove 5f927b245cb3e7e8331d29862d326684a67b0d9131a82e62d574cfc5c5d6eb43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:01:24 compute-0 systemd[1]: libpod-conmon-5f927b245cb3e7e8331d29862d326684a67b0d9131a82e62d574cfc5c5d6eb43.scope: Deactivated successfully.
Nov 25 10:01:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:24.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:24 compute-0 podman[270485]: 2025-11-25 10:01:24.599681516 +0000 UTC m=+0.027820654 container create 44863d0cbbffd429320519c4d4ebe0ab5611de28a0080691e40b08fbd0fdc499 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_galileo, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:01:24 compute-0 systemd[1]: Started libpod-conmon-44863d0cbbffd429320519c4d4ebe0ab5611de28a0080691e40b08fbd0fdc499.scope.
Nov 25 10:01:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64932493ddf9d07eeced24915f1e2d3c20cac59c740f1ee0e94dc30e3305cc09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64932493ddf9d07eeced24915f1e2d3c20cac59c740f1ee0e94dc30e3305cc09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64932493ddf9d07eeced24915f1e2d3c20cac59c740f1ee0e94dc30e3305cc09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64932493ddf9d07eeced24915f1e2d3c20cac59c740f1ee0e94dc30e3305cc09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64932493ddf9d07eeced24915f1e2d3c20cac59c740f1ee0e94dc30e3305cc09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
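Annotator's note: the kernel's "supports timestamps until 2038 (0x7fffffff)" warnings above refer to the signed 32-bit Unix epoch limit on this XFS format; a one-line check of the date it encodes:

```python
from datetime import datetime, timezone

# 0x7fffffff = 2**31 - 1 seconds after the epoch
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```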
Nov 25 10:01:24 compute-0 podman[270485]: 2025-11-25 10:01:24.659753697 +0000 UTC m=+0.087892845 container init 44863d0cbbffd429320519c4d4ebe0ab5611de28a0080691e40b08fbd0fdc499 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_galileo, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:01:24 compute-0 podman[270485]: 2025-11-25 10:01:24.663828971 +0000 UTC m=+0.091968109 container start 44863d0cbbffd429320519c4d4ebe0ab5611de28a0080691e40b08fbd0fdc499 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 25 10:01:24 compute-0 podman[270485]: 2025-11-25 10:01:24.665070873 +0000 UTC m=+0.093210011 container attach 44863d0cbbffd429320519c4d4ebe0ab5611de28a0080691e40b08fbd0fdc499 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_galileo, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:01:24 compute-0 podman[270485]: 2025-11-25 10:01:24.589004085 +0000 UTC m=+0.017143223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:01:24 compute-0 nova_compute[253512]: 2025-11-25 10:01:24.790 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:24 compute-0 clever_galileo[270499]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:01:24 compute-0 clever_galileo[270499]: --> All data devices are unavailable
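Annotator's note: `lvm batch` saw one LVM data device (`/dev/ceph_vg0/ceph_lv0`) and rejected it as unavailable, which typically means the LV is already prepared as an OSD; cephadm's very next step below runs `ceph-volume ... lvm list` to confirm. A hedged check along the same lines (the `lv_path` field name is assumed from typical ceph-volume JSON output):

```python
import json
import subprocess

def osd_owning(lv_path: str):
    """Return the OSD id that already claims lv_path, or None."""
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, devices in json.loads(out).items():
        if any(d.get("lv_path") == lv_path for d in devices):
            return osd_id
    return None

# osd_owning("/dev/ceph_vg0/ceph_lv0") -> e.g. "0" if the LV is already an OSD
```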
Nov 25 10:01:24 compute-0 systemd[1]: libpod-44863d0cbbffd429320519c4d4ebe0ab5611de28a0080691e40b08fbd0fdc499.scope: Deactivated successfully.
Nov 25 10:01:24 compute-0 podman[270485]: 2025-11-25 10:01:24.926143372 +0000 UTC m=+0.354282510 container died 44863d0cbbffd429320519c4d4ebe0ab5611de28a0080691e40b08fbd0fdc499 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-64932493ddf9d07eeced24915f1e2d3c20cac59c740f1ee0e94dc30e3305cc09-merged.mount: Deactivated successfully.
Nov 25 10:01:24 compute-0 podman[270485]: 2025-11-25 10:01:24.947499037 +0000 UTC m=+0.375638174 container remove 44863d0cbbffd429320519c4d4ebe0ab5611de28a0080691e40b08fbd0fdc499 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:01:24 compute-0 systemd[1]: libpod-conmon-44863d0cbbffd429320519c4d4ebe0ab5611de28a0080691e40b08fbd0fdc499.scope: Deactivated successfully.
Nov 25 10:01:24 compute-0 sudo[270394]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:25 compute-0 sudo[270523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:01:25 compute-0 sudo[270523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:25 compute-0 sudo[270523]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:25 compute-0 sudo[270548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:01:25 compute-0 sudo[270548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:25 compute-0 podman[270604]: 2025-11-25 10:01:25.351870625 +0000 UTC m=+0.027251893 container create 794796adddd7b0aa7cd2d320f6c4400d0451a3de887aa3170ae9ae53a7739b98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_chatelet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:01:25 compute-0 systemd[1]: Started libpod-conmon-794796adddd7b0aa7cd2d320f6c4400d0451a3de887aa3170ae9ae53a7739b98.scope.
Nov 25 10:01:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:01:25 compute-0 podman[270604]: 2025-11-25 10:01:25.399531305 +0000 UTC m=+0.074912593 container init 794796adddd7b0aa7cd2d320f6c4400d0451a3de887aa3170ae9ae53a7739b98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:01:25 compute-0 podman[270604]: 2025-11-25 10:01:25.404533777 +0000 UTC m=+0.079915044 container start 794796adddd7b0aa7cd2d320f6c4400d0451a3de887aa3170ae9ae53a7739b98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_chatelet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 10:01:25 compute-0 podman[270604]: 2025-11-25 10:01:25.405857643 +0000 UTC m=+0.081238911 container attach 794796adddd7b0aa7cd2d320f6c4400d0451a3de887aa3170ae9ae53a7739b98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 25 10:01:25 compute-0 stoic_chatelet[270617]: 167 167
Nov 25 10:01:25 compute-0 systemd[1]: libpod-794796adddd7b0aa7cd2d320f6c4400d0451a3de887aa3170ae9ae53a7739b98.scope: Deactivated successfully.
Nov 25 10:01:25 compute-0 podman[270604]: 2025-11-25 10:01:25.408037182 +0000 UTC m=+0.083418449 container died 794796adddd7b0aa7cd2d320f6c4400d0451a3de887aa3170ae9ae53a7739b98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_chatelet, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 10:01:25 compute-0 ceph-mon[74207]: pgmap v906: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Nov 25 10:01:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d411a57521c0c4e32702ad79421d450a7949771f798ebf760c834aab9aa735b4-merged.mount: Deactivated successfully.
Nov 25 10:01:25 compute-0 podman[270604]: 2025-11-25 10:01:25.430161916 +0000 UTC m=+0.105543185 container remove 794796adddd7b0aa7cd2d320f6c4400d0451a3de887aa3170ae9ae53a7739b98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_chatelet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 10:01:25 compute-0 podman[270604]: 2025-11-25 10:01:25.341105527 +0000 UTC m=+0.016486815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:01:25 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:01:25 compute-0 systemd[1]: libpod-conmon-794796adddd7b0aa7cd2d320f6c4400d0451a3de887aa3170ae9ae53a7739b98.scope: Deactivated successfully.
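The short-lived stoic_chatelet container above printed only "167 167" before exiting, which looks like cephadm's uid/gid probe of the ceph image (167:167 is the ceph user/group baked into quay.io/ceph/ceph). A minimal sketch of such a probe follows; the exact path being stat'ed is an assumption, not taken from the log.

    # Sketch (not the exact cephadm invocation): reproduce the "167 167"
    # uid/gid probe that the short-lived container printed above. The path
    # being stat'ed is an assumption; 167:167 is the ceph user/group baked
    # into the quay.io/ceph/ceph image.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        text=True,
    )
    uid, gid = out.split()  # expect "167 167", matching the log line above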
Nov 25 10:01:25 compute-0 nova_compute[253512]: 2025-11-25 10:01:25.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:25 compute-0 nova_compute[253512]: 2025-11-25 10:01:25.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:25 compute-0 nova_compute[253512]: 2025-11-25 10:01:25.495 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:25 compute-0 nova_compute[253512]: 2025-11-25 10:01:25.495 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:25 compute-0 nova_compute[253512]: 2025-11-25 10:01:25.495 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:25 compute-0 nova_compute[253512]: 2025-11-25 10:01:25.495 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:01:25 compute-0 nova_compute[253512]: 2025-11-25 10:01:25.495 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:01:25 compute-0 podman[270640]: 2025-11-25 10:01:25.555358453 +0000 UTC m=+0.030899739 container create 8c654eda116bdc94efad42cfbbd214f77ead476d9c54357467e154011a531ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:01:25 compute-0 systemd[1]: Started libpod-conmon-8c654eda116bdc94efad42cfbbd214f77ead476d9c54357467e154011a531ad5.scope.
Nov 25 10:01:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36acbcbb5f56808ce67c6013578cf582543dac60aabdaa4900904b073d44ab64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36acbcbb5f56808ce67c6013578cf582543dac60aabdaa4900904b073d44ab64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36acbcbb5f56808ce67c6013578cf582543dac60aabdaa4900904b073d44ab64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36acbcbb5f56808ce67c6013578cf582543dac60aabdaa4900904b073d44ab64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:25 compute-0 podman[270640]: 2025-11-25 10:01:25.606015113 +0000 UTC m=+0.081556399 container init 8c654eda116bdc94efad42cfbbd214f77ead476d9c54357467e154011a531ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_newton, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:01:25 compute-0 podman[270640]: 2025-11-25 10:01:25.611106561 +0000 UTC m=+0.086647847 container start 8c654eda116bdc94efad42cfbbd214f77ead476d9c54357467e154011a531ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_newton, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:01:25 compute-0 podman[270640]: 2025-11-25 10:01:25.614920123 +0000 UTC m=+0.090461408 container attach 8c654eda116bdc94efad42cfbbd214f77ead476d9c54357467e154011a531ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 10:01:25 compute-0 podman[270640]: 2025-11-25 10:01:25.544089026 +0000 UTC m=+0.019630332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:01:25 compute-0 pedantic_newton[270654]: {
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:     "1": [
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:         {
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "devices": [
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "/dev/loop3"
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             ],
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "lv_name": "ceph_lv0",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "lv_size": "21470642176",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "name": "ceph_lv0",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "tags": {
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.cluster_name": "ceph",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.crush_device_class": "",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.encrypted": "0",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.osd_id": "1",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.type": "block",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.vdo": "0",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:                 "ceph.with_tpm": "0"
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             },
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "type": "block",
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:             "vg_name": "ceph_vg0"
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:         }
Nov 25 10:01:25 compute-0 pedantic_newton[270654]:     ]
Nov 25 10:01:25 compute-0 pedantic_newton[270654]: }
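The JSON block that pedantic_newton just emitted is a ceph-volume LVM report for osd.1 (one logical volume, /dev/ceph_vg0/ceph_lv0 on /dev/loop3). A minimal parsing sketch, assuming ceph-volume is invocable directly; in this deployment it actually runs inside the ceph container via cephadm, as the sudo lines below show.

    # Minimal sketch for parsing a "ceph-volume lvm list --format json"
    # report shaped like the one printed above: top-level keys are OSD ids,
    # each mapping to a list of logical-volume records.
    import json
    import subprocess

    report = json.loads(subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"], text=True))
    for osd_id, volumes in report.items():          # e.g. {"1": [...]}
        for vol in volumes:
            tags = vol["tags"]
            print(f"osd.{osd_id}: {vol['lv_path']} "
                  f"type={tags['ceph.type']} fsid={tags['ceph.osd_fsid']}")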
Nov 25 10:01:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:01:25 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979500548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:25 compute-0 systemd[1]: libpod-8c654eda116bdc94efad42cfbbd214f77ead476d9c54357467e154011a531ad5.scope: Deactivated successfully.
Nov 25 10:01:25 compute-0 podman[270640]: 2025-11-25 10:01:25.849730998 +0000 UTC m=+0.325272294 container died 8c654eda116bdc94efad42cfbbd214f77ead476d9c54357467e154011a531ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_newton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Nov 25 10:01:25 compute-0 nova_compute[253512]: 2025-11-25 10:01:25.856 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
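The resource audit above shells out to "ceph df --format=json" as client.openstack to size the RBD-backed disk pool. A sketch of reading the cluster totals from that command's output; the "stats" key names assume the stock `ceph df --format=json` schema.

    # Sketch of the capacity probe nova just ran: call "ceph df" with the
    # same client id and config, then read the cluster-wide totals.
    import json
    import subprocess

    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"], text=True))
    stats = df["stats"]
    print("total GiB:", stats["total_bytes"] / 2**30,
          "avail GiB:", stats["total_avail_bytes"] / 2**30)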
Nov 25 10:01:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-36acbcbb5f56808ce67c6013578cf582543dac60aabdaa4900904b073d44ab64-merged.mount: Deactivated successfully.
Nov 25 10:01:25 compute-0 podman[270640]: 2025-11-25 10:01:25.875941395 +0000 UTC m=+0.351482681 container remove 8c654eda116bdc94efad42cfbbd214f77ead476d9c54357467e154011a531ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 10:01:25 compute-0 systemd[1]: libpod-conmon-8c654eda116bdc94efad42cfbbd214f77ead476d9c54357467e154011a531ad5.scope: Deactivated successfully.
Nov 25 10:01:25 compute-0 sudo[270548]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:25 compute-0 sudo[270694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:01:25 compute-0 sudo[270694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:25 compute-0 sudo[270694]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Nov 25 10:01:26 compute-0 sudo[270720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:01:26 compute-0 sudo[270720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.100 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.101 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4591MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.102 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.102 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.138 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.148 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.148 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.167 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:01:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:26.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:26 compute-0 podman[270796]: 2025-11-25 10:01:26.365212162 +0000 UTC m=+0.029544283 container create 7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:01:26 compute-0 systemd[1]: Started libpod-conmon-7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689.scope.
Nov 25 10:01:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:01:26 compute-0 podman[270796]: 2025-11-25 10:01:26.415654478 +0000 UTC m=+0.079986600 container init 7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:01:26 compute-0 podman[270796]: 2025-11-25 10:01:26.42423755 +0000 UTC m=+0.088569662 container start 7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:01:26 compute-0 podman[270796]: 2025-11-25 10:01:26.425308169 +0000 UTC m=+0.089640290 container attach 7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 10:01:26 compute-0 gracious_benz[270810]: 167 167
Nov 25 10:01:26 compute-0 systemd[1]: libpod-7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689.scope: Deactivated successfully.
Nov 25 10:01:26 compute-0 conmon[270810]: conmon 7aa8a6dd72554b0dd482 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689.scope/container/memory.events
Nov 25 10:01:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2979500548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:26 compute-0 podman[270796]: 2025-11-25 10:01:26.429061636 +0000 UTC m=+0.093393757 container died 7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 10:01:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c250b533c6999cfe8a303e1c16631098d760f39df6f5c8da1b7949d0a4ead821-merged.mount: Deactivated successfully.
Nov 25 10:01:26 compute-0 podman[270796]: 2025-11-25 10:01:26.449230142 +0000 UTC m=+0.113562252 container remove 7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_benz, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:01:26 compute-0 podman[270796]: 2025-11-25 10:01:26.353134723 +0000 UTC m=+0.017466854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:01:26 compute-0 systemd[1]: libpod-conmon-7aa8a6dd72554b0dd482111da4cab7aed8f0d024a1b17c44c8f1cf11b11e8689.scope: Deactivated successfully.
Nov 25 10:01:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:01:26 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1338305608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.515 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.520 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.532 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
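A quick arithmetic check of the inventory just logged, using placement's capacity formula capacity = (total - reserved) * allocation_ratio; the figures below come straight from the log line above.

    # Worked check of the inventory nova reported for provider
    # d9873737-caae-40cc-9346-77a33537057c.
    inventory = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 16.0, MEMORY_MB 7169.0, DISK_GB 52.2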
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.534 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:01:26 compute-0 nova_compute[253512]: 2025-11-25 10:01:26.534 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:01:26 compute-0 podman[270834]: 2025-11-25 10:01:26.571726669 +0000 UTC m=+0.027556097 container create fb4a198c8b7e511529c7b4119ce5961a955bbb51c06d85b26466d903f8f93940 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:01:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:26.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:26 compute-0 systemd[1]: Started libpod-conmon-fb4a198c8b7e511529c7b4119ce5961a955bbb51c06d85b26466d903f8f93940.scope.
Nov 25 10:01:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7818c20b5d9b2fb7171543edbf2361c77af8cfe954946e2036b46aba4a41776/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7818c20b5d9b2fb7171543edbf2361c77af8cfe954946e2036b46aba4a41776/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7818c20b5d9b2fb7171543edbf2361c77af8cfe954946e2036b46aba4a41776/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7818c20b5d9b2fb7171543edbf2361c77af8cfe954946e2036b46aba4a41776/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:01:26 compute-0 podman[270834]: 2025-11-25 10:01:26.631062782 +0000 UTC m=+0.086892229 container init fb4a198c8b7e511529c7b4119ce5961a955bbb51c06d85b26466d903f8f93940 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 10:01:26 compute-0 podman[270834]: 2025-11-25 10:01:26.635684977 +0000 UTC m=+0.091514415 container start fb4a198c8b7e511529c7b4119ce5961a955bbb51c06d85b26466d903f8f93940 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:01:26 compute-0 podman[270834]: 2025-11-25 10:01:26.636841427 +0000 UTC m=+0.092670855 container attach fb4a198c8b7e511529c7b4119ce5961a955bbb51c06d85b26466d903f8f93940 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Nov 25 10:01:26 compute-0 podman[270834]: 2025-11-25 10:01:26.560845212 +0000 UTC m=+0.016674661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:01:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:27.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:27.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:27.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
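The Alertmanager failures above are all the same fault: the ceph-dashboard webhook targets np000553469{4,5,6}.shiftstack do not resolve against the DNS server at 192.168.122.80. A sketch that reproduces only the lookup; hostnames and port are taken from the log, nothing else is assumed.

    # Reproduce just the resolver failure Alertmanager reports above.
    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            print(host, "->", socket.getaddrinfo(host, 8443)[0][4])
        except socket.gaierror as exc:
            print(host, "-> lookup failed:", exc)  # "no such host" in the log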
Nov 25 10:01:27 compute-0 hardcore_goldberg[270847]: {}
Nov 25 10:01:27 compute-0 lvm[270924]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:01:27 compute-0 lvm[270924]: VG ceph_vg0 finished
Nov 25 10:01:27 compute-0 systemd[1]: libpod-fb4a198c8b7e511529c7b4119ce5961a955bbb51c06d85b26466d903f8f93940.scope: Deactivated successfully.
Nov 25 10:01:27 compute-0 podman[270834]: 2025-11-25 10:01:27.147279474 +0000 UTC m=+0.603108903 container died fb4a198c8b7e511529c7b4119ce5961a955bbb51c06d85b26466d903f8f93940 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7818c20b5d9b2fb7171543edbf2361c77af8cfe954946e2036b46aba4a41776-merged.mount: Deactivated successfully.
Nov 25 10:01:27 compute-0 podman[270834]: 2025-11-25 10:01:27.167457277 +0000 UTC m=+0.623286706 container remove fb4a198c8b7e511529c7b4119ce5961a955bbb51c06d85b26466d903f8f93940 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldberg, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:01:27 compute-0 systemd[1]: libpod-conmon-fb4a198c8b7e511529c7b4119ce5961a955bbb51c06d85b26466d903f8f93940.scope: Deactivated successfully.
Nov 25 10:01:27 compute-0 sudo[270720]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:01:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:01:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:27 compute-0 sudo[270935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:01:27 compute-0 sudo[270935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:27 compute-0 sudo[270935]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:27 compute-0 ceph-mon[74207]: pgmap v907: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Nov 25 10:01:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1338305608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:27 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:01:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2689774546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:27 compute-0 nova_compute[253512]: 2025-11-25 10:01:27.530 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:27 compute-0 nova_compute[253512]: 2025-11-25 10:01:27.531 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:01:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/589932064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:28 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Nov 25 10:01:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:28.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/589932064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:28 compute-0 nova_compute[253512]: 2025-11-25 10:01:28.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:28.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:28.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:28.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:28.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:28.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:29 compute-0 ceph-mon[74207]: pgmap v908: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Nov 25 10:01:29 compute-0 nova_compute[253512]: 2025-11-25 10:01:29.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:29 compute-0 nova_compute[253512]: 2025-11-25 10:01:29.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:01:29 compute-0 nova_compute[253512]: 2025-11-25 10:01:29.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:01:29 compute-0 nova_compute[253512]: 2025-11-25 10:01:29.482 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:01:29 compute-0 nova_compute[253512]: 2025-11-25 10:01:29.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:29 compute-0 nova_compute[253512]: 2025-11-25 10:01:29.482 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:01:29 compute-0 nova_compute[253512]: 2025-11-25 10:01:29.792 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:01:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:29 compute-0 podman[270964]: 2025-11-25 10:01:29.974596167 +0000 UTC m=+0.039075795 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 25 10:01:30 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Nov 25 10:01:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:30] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:01:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:30] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:01:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:30.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:30 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2805373666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:01:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:30.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:31 compute-0 nova_compute[253512]: 2025-11-25 10:01:31.140 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:31 compute-0 ceph-mon[74207]: pgmap v909: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Nov 25 10:01:32 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 10:01:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:32.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:32 compute-0 nova_compute[253512]: 2025-11-25 10:01:32.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:01:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:32.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:33 compute-0 ceph-mon[74207]: pgmap v910: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 10:01:34 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 10:01:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:34.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:34 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/295942382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:01:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:34.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:34 compute-0 nova_compute[253512]: 2025-11-25 10:01:34.794 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:35 compute-0 ceph-mon[74207]: pgmap v911: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 10:01:35 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1407742521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:01:36 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 10:01:36 compute-0 nova_compute[253512]: 2025-11-25 10:01:36.143 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:36.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:36.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:36 compute-0 podman[270987]: 2025-11-25 10:01:36.988768549 +0000 UTC m=+0.052416048 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 10:01:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:37.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:37.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:37.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:37.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:37 compute-0 ceph-mon[74207]: pgmap v912: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 10:01:38 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 10:01:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:38.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:38 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 10:01:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:01:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:38.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:01:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:38.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:38.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:38.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:38.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:39 compute-0 ceph-mon[74207]: pgmap v913: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 10:01:39 compute-0 nova_compute[253512]: 2025-11-25 10:01:39.796 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:40 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 10:01:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:40] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:01:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:40] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:01:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:40.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:40.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:41 compute-0 nova_compute[253512]: 2025-11-25 10:01:41.144 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:41 compute-0 ceph-mon[74207]: pgmap v914: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 10:01:42 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:01:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:42.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:42.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:42 compute-0 podman[271017]: 2025-11-25 10:01:42.976438348 +0000 UTC m=+0.041084242 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:01:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:43 compute-0 sudo[271034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:01:43 compute-0 sudo[271034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:01:43 compute-0 sudo[271034]: pam_unix(sudo:session): session closed for user root
Nov 25 10:01:43 compute-0 ceph-mon[74207]: pgmap v915: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:01:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:44.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:44.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:44 compute-0 nova_compute[253512]: 2025-11-25 10:01:44.798 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:01:44
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.nfs', '.mgr', 'images', 'volumes']
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 10:01:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:01:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:01:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:01:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:01:45 compute-0 ceph-mon[74207]: pgmap v916: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:01:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:01:46 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:01:46 compute-0 nova_compute[253512]: 2025-11-25 10:01:46.147 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:46.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:46.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:47.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:47.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:47.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:47.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-0[79443]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 25 10:01:47 compute-0 ceph-mon[74207]: pgmap v917: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:01:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:48 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:01:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:48.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:48.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:48.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:48.855Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:48.855Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:48.855Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:49 compute-0 ceph-mon[74207]: pgmap v918: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 10:01:49 compute-0 nova_compute[253512]: 2025-11-25 10:01:49.800 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:50 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Nov 25 10:01:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:50] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Nov 25 10:01:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:01:50] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Nov 25 10:01:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:50.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:50.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:51 compute-0 nova_compute[253512]: 2025-11-25 10:01:51.149 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:51 compute-0 ceph-mon[74207]: pgmap v919: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Nov 25 10:01:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 25 10:01:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:52.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:53 compute-0 ceph-mon[74207]: pgmap v920: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 25 10:01:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:01:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:01:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:54.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:01:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/816471803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:01:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/816471803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:01:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:54.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:54 compute-0 nova_compute[253512]: 2025-11-25 10:01:54.802 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:01:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:01:55 compute-0 ceph-mon[74207]: pgmap v921: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:01:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:01:56 compute-0 nova_compute[253512]: 2025-11-25 10:01:56.151 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:56.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:57.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:57.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:57.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:57.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:57 compute-0 ceph-mon[74207]: pgmap v922: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:01:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:01:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:01:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:58.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:01:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:01:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:58.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:01:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:58.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:58.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:58.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:01:58.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:01:59 compute-0 ceph-mon[74207]: pgmap v923: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:01:59 compute-0 nova_compute[253512]: 2025-11-25 10:01:59.804 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:01:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:01:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 267 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:02:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:00] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Nov 25 10:02:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:00] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Nov 25 10:02:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:00.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:00 compute-0 podman[271077]: 2025-11-25 10:02:00.965499132 +0000 UTC m=+0.030262819 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 10:02:01 compute-0 nova_compute[253512]: 2025-11-25 10:02:01.154 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:01 compute-0 ceph-mon[74207]: pgmap v924: 337 pgs: 337 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 267 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 10:02:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1795209543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 10:02:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:02.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:02.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:03 compute-0 sudo[271095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:02:03 compute-0 sudo[271095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:03 compute-0 sudo[271095]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:03 compute-0 ceph-mon[74207]: pgmap v925: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 10:02:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Nov 25 10:02:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:04.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:04.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:04 compute-0 nova_compute[253512]: 2025-11-25 10:02:04.806 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:02:05.390 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:02:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:02:05.390 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:02:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:02:05.390 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:02:05 compute-0 ceph-mon[74207]: pgmap v926: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Nov 25 10:02:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Nov 25 10:02:06 compute-0 nova_compute[253512]: 2025-11-25 10:02:06.156 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:06.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:06.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:07.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:07.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:07.079Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:07.079Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
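The alertmanager error and the three warnings above recur throughout this capture: every ceph-dashboard webhook target (np0005534694/np0005534695/np0005534696.shiftstack) fails DNS resolution against 192.168.122.80:53, so notifications are retried and eventually dropped. A quick way to confirm the failure, assuming the checking host uses the same resolver; the hostnames are copied from the log:

    # Check the three webhook targets through the system resolver. On this host
    # each lookup should raise gaierror, matching the "no such host" errors above.
    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as exc:
            print(host, "-> lookup failed:", exc)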
Nov 25 10:02:07 compute-0 ceph-mon[74207]: pgmap v927: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Nov 25 10:02:07 compute-0 podman[271126]: 2025-11-25 10:02:07.994460506 +0000 UTC m=+0.056668445 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
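The podman event above is a scheduled container healthcheck passing for ovn_controller (health_status=healthy, failing streak 0); per the embedded config_data, the test is the mounted /openstack/healthcheck script. As a sketch, the same check can be triggered by hand with podman's healthcheck subcommand:

    # Run ovn_controller's configured healthcheck once; exit code 0 means
    # healthy, mirroring the health_status=healthy event above.
    import subprocess

    rc = subprocess.call(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if rc == 0 else f"unhealthy (exit {rc})")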
Nov 25 10:02:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 29 op/s
Nov 25 10:02:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:08.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:08.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:08.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:08.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:08.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:08.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:09 compute-0 ceph-mon[74207]: pgmap v928: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 29 op/s
Nov 25 10:02:09 compute-0 nova_compute[253512]: 2025-11-25 10:02:09.807 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Nov 25 10:02:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:10] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Nov 25 10:02:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:10] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Nov 25 10:02:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:10.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.622049) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930622065, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2123, "num_deletes": 251, "total_data_size": 4047544, "memory_usage": 4099656, "flush_reason": "Manual Compaction"}
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 25 10:02:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:10.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930629364, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3937847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24758, "largest_seqno": 26879, "table_properties": {"data_size": 3928612, "index_size": 5729, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19664, "raw_average_key_size": 20, "raw_value_size": 3909867, "raw_average_value_size": 4030, "num_data_blocks": 252, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064721, "oldest_key_time": 1764064721, "file_creation_time": 1764064930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 7349 microseconds, and 5350 cpu microseconds.
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.629393) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3937847 bytes OK
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.629407) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.629727) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.629737) EVENT_LOG_v1 {"time_micros": 1764064930629734, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.629748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4038960, prev total WAL file size 4038960, number of live WAL files 2.
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.630404) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3845KB)], [56(11MB)]
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930630427, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16019080, "oldest_snapshot_seqno": -1}
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5851 keys, 13905860 bytes, temperature: kUnknown
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930654341, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 13905860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13866579, "index_size": 23555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148547, "raw_average_key_size": 25, "raw_value_size": 13760951, "raw_average_value_size": 2351, "num_data_blocks": 959, "num_entries": 5851, "num_filter_entries": 5851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764064930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.654471) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 13905860 bytes
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.658853) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 669.2 rd, 580.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.5 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 6368, records dropped: 517 output_compression: NoCompression
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.658866) EVENT_LOG_v1 {"time_micros": 1764064930658861, "job": 30, "event": "compaction_finished", "compaction_time_micros": 23939, "compaction_time_cpu_micros": 19525, "output_level": 6, "num_output_files": 1, "total_output_size": 13905860, "num_input_records": 6368, "num_output_records": 5851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930659442, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930661024, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.630365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.661073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.661077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.661078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.661079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:02:10 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:02:10.661080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
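The rocksdb burst above is one manual flush-plus-compaction cycle on the mon's store.db: JOB 29 flushes a ~4 MB memtable to L0 table #58, JOB 30 compacts #58 with L6 table #56 into table #59, and both inputs are deleted. The amplification figures in the JOB 30 summary can be re-derived from byte counts the log already contains:

    # Re-derive JOB 30's amplification numbers from the logged byte counts.
    l0_in = 3_937_847            # table #58 file_size (flush output, L0 input)
    l6_in = 16_019_080 - l0_in   # input_data_size minus the L0 file = L6 input
    out   = 13_905_860           # total_output_size of table #59
    print(round(out / l0_in, 1))                    # 3.5 -> write-amplify(3.5)
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 7.6 -> read-write-amplify(7.6)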
Nov 25 10:02:11 compute-0 nova_compute[253512]: 2025-11-25 10:02:11.157 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:11 compute-0 ceph-mon[74207]: pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Nov 25 10:02:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Nov 25 10:02:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:12.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:13 compute-0 ceph-mon[74207]: pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Nov 25 10:02:13 compute-0 podman[271155]: 2025-11-25 10:02:13.970478486 +0000 UTC m=+0.035702755 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:02:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:14.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:14.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:14 compute-0 nova_compute[253512]: 2025-11-25 10:02:14.809 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:02:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:02:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:02:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:02:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:02:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:02:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:02:15 compute-0 ceph-mon[74207]: pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:16 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:16 compute-0 nova_compute[253512]: 2025-11-25 10:02:16.160 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:16.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:16.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:17.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:17.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:17.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:17.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:17 compute-0 ceph-mon[74207]: pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:18 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:18.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:02:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:18.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:02:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:18.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:18.855Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:18.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:18.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:19 compute-0 ceph-mon[74207]: pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:19 compute-0 nova_compute[253512]: 2025-11-25 10:02:19.811 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:20] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:02:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:20] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:02:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:20.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:20.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:21 compute-0 nova_compute[253512]: 2025-11-25 10:02:21.161 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:21 compute-0 ceph-mon[74207]: pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:22.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:02:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:22.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:02:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:23 compute-0 sudo[271180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:02:23 compute-0 sudo[271180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:23 compute-0 sudo[271180]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:23 compute-0 ceph-mon[74207]: pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:24.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:24.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3580303334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/364596767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:24 compute-0 nova_compute[253512]: 2025-11-25 10:02:24.813 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:25 compute-0 nova_compute[253512]: 2025-11-25 10:02:25.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:25 compute-0 nova_compute[253512]: 2025-11-25 10:02:25.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:25 compute-0 nova_compute[253512]: 2025-11-25 10:02:25.490 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:02:25 compute-0 nova_compute[253512]: 2025-11-25 10:02:25.490 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:02:25 compute-0 nova_compute[253512]: 2025-11-25 10:02:25.491 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:02:25 compute-0 nova_compute[253512]: 2025-11-25 10:02:25.491 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:02:25 compute-0 nova_compute[253512]: 2025-11-25 10:02:25.491 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:02:25 compute-0 ceph-mon[74207]: pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:02:25 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681619112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:25 compute-0 nova_compute[253512]: 2025-11-25 10:02:25.857 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:02:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.098 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.099 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4644MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.100 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.100 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.150 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.150 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.164 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.186 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.543 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.549 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.561 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.563 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:02:26 compute-0 nova_compute[253512]: 2025-11-25 10:02:26.563 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
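The nova_compute block above is one update_available_resource pass: the resource tracker takes the compute_resources lock, audits the hypervisor view, and sizes shared storage by shelling out to the exact command logged twice above, ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf. A sketch of that probe; the JSON field names are assumptions based on current ceph releases, so verify them against the deployed version:

    # Run the same ceph df probe nova does and read the cluster totals.
    # "stats"/"total_bytes"/"total_avail_bytes" are assumed field names.
    import json, subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])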
Nov 25 10:02:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:26.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3681619112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1741411573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:27.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:27.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:27.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:27 compute-0 sudo[271253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:02:27 compute-0 sudo[271253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:27 compute-0 sudo[271253]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:27 compute-0 sudo[271278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:02:27 compute-0 sudo[271278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:27 compute-0 nova_compute[253512]: 2025-11-25 10:02:27.563 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:27 compute-0 nova_compute[253512]: 2025-11-25 10:02:27.563 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:27 compute-0 nova_compute[253512]: 2025-11-25 10:02:27.563 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:27 compute-0 ceph-mon[74207]: pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/546263598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:27 compute-0 sudo[271278]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:02:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:02:27 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:02:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:02:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
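[annotation] The mon and mgr pgmap lines above follow a fixed shape: map version, PG count and states, then data/used/available and client IO rates. A parser sketch fitted to these samples (the regex is an assumption based on the lines above, not a documented grammar):

    import re

    LINE = ("pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, "
            "285 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s")

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>[^,]+) data, (?P<used>[^,]+) used, (?P<avail>[^;]+) avail"
    )

    match = PGMAP_RE.search(LINE)
    if match:
        print(match.groupdict())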
Nov 25 10:02:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:02:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:02:28 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:02:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:02:28 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:02:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:02:28 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
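[annotation] This audit burst is the cephadm mgr module refreshing host material: a minimal client config plus the admin and bootstrap-osd keyrings. Both are ordinary CLI commands and can be replayed from any node holding an admin keyring, e.g.:

    import subprocess

    # The same commands the mgr dispatches to the mon in the audit lines above.
    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.bootstrap-osd"],
    ):
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print("$", " ".join(cmd))
        print(result.stdout)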
Nov 25 10:02:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:28 compute-0 sudo[271334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:02:28 compute-0 sudo[271334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:28 compute-0 sudo[271334]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:28 compute-0 sudo[271359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:02:28 compute-0 sudo[271359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:02:28 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3470 syncs, 3.43 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2964 writes, 9767 keys, 2964 commit groups, 1.0 writes per commit group, ingest: 10.40 MB, 0.02 MB/s
                                           Interval WAL: 2964 writes, 1326 syncs, 2.24 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
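[annotation] The RocksDB ratios above can be verified from the printed figures; note the cumulative counters are truncated for display ("11K"), so ratios derived from them only match approximately:

    # "Interval WAL" line: 2964 writes, 1326 syncs.
    print(round(2964 / 1326, 2))   # 2.24, matching "2.24 writes per sync"

    # "Cumulative WAL" line: "11K" is a truncated display of the real counter;
    # 3.43 writes per sync over 3470 syncs implies roughly 11.9K writes.
    print(round(3.43 * 3470))      # ~11902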
Nov 25 10:02:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:02:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:28.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
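[annotation] radosgw's beast frontend writes a common-log style access line with a trailing latency field; the anonymous HEAD / probes above are most likely load-balancer health checks from the other nodes. A parsing sketch fitted to these samples (field layout inferred from the log, not from a published grammar):

    import re

    LINE = ('beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous '
            '[25/Nov/2025:10:02:28.413 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000011s')

    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    m = BEAST_RE.search(LINE)
    if m:
        print(m.group("client"), m.group("request"),
              m.group("status"), m.group("latency"))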
Nov 25 10:02:28 compute-0 nova_compute[253512]: 2025-11-25 10:02:28.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:28 compute-0 podman[271414]: 2025-11-25 10:02:28.532795557 +0000 UTC m=+0.039317231 container create b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:02:28 compute-0 systemd[1]: Started libpod-conmon-b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4.scope.
Nov 25 10:02:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:02:28 compute-0 podman[271414]: 2025-11-25 10:02:28.517001789 +0000 UTC m=+0.023523463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:02:28 compute-0 podman[271414]: 2025-11-25 10:02:28.617206646 +0000 UTC m=+0.123728320 container init b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_hamilton, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:02:28 compute-0 podman[271414]: 2025-11-25 10:02:28.624095554 +0000 UTC m=+0.130617228 container start b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 10:02:28 compute-0 podman[271414]: 2025-11-25 10:02:28.625250251 +0000 UTC m=+0.131771926 container attach b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:02:28 compute-0 nifty_hamilton[271427]: 167 167
Nov 25 10:02:28 compute-0 systemd[1]: libpod-b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4.scope: Deactivated successfully.
Nov 25 10:02:28 compute-0 conmon[271427]: conmon b4168a73909ad975a3ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4.scope/container/memory.events
Nov 25 10:02:28 compute-0 podman[271414]: 2025-11-25 10:02:28.632741214 +0000 UTC m=+0.139262889 container died b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 10:02:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:28.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf34ede9722d75d7a7aac067e6d982ffe181d04fbb86741dadf6957675fac85-merged.mount: Deactivated successfully.
Nov 25 10:02:28 compute-0 podman[271414]: 2025-11-25 10:02:28.654534014 +0000 UTC m=+0.161055688 container remove b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_hamilton, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 10:02:28 compute-0 systemd[1]: libpod-conmon-b4168a73909ad975a3bafd188aab6c9242faa10584cac551be3b0728f4cdf0f4.scope: Deactivated successfully.
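[annotation] The create/init/start/attach/died/remove sequence above is cephadm running a throwaway container whose only output is "167 167", the ceph user's UID and GID inside the image. A sketch of that probe against the same image digest (the exact stat invocation is an assumption, not copied from cephadm):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # --rm reproduces the same short-lived create/start/died/remove lifecycle.
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())   # expected: "167 167"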
Nov 25 10:02:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1877182358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:02:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:02:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:02:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:02:28 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:28 compute-0 podman[271449]: 2025-11-25 10:02:28.800908861 +0000 UTC m=+0.037055486 container create 2c86fc7ae49062b2c17d46b4cdc5a398bd6d373674ec6924e8d33fe1ec5fe97a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:02:28 compute-0 systemd[1]: Started libpod-conmon-2c86fc7ae49062b2c17d46b4cdc5a398bd6d373674ec6924e8d33fe1ec5fe97a.scope.
Nov 25 10:02:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:28.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:02:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:28.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:28.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:28.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e435f57d3331faf6f2f5a7b6d175813313059228b62cfdb14d19dee993edbc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e435f57d3331faf6f2f5a7b6d175813313059228b62cfdb14d19dee993edbc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e435f57d3331faf6f2f5a7b6d175813313059228b62cfdb14d19dee993edbc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e435f57d3331faf6f2f5a7b6d175813313059228b62cfdb14d19dee993edbc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e435f57d3331faf6f2f5a7b6d175813313059228b62cfdb14d19dee993edbc2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
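[annotation] The xfs lines above are informational, not errors: these overlay mounts carry 32-bit inode timestamps, valid up to 0x7fffffff seconds after the Unix epoch. Converting that limit:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit the kernel prints for these mounts.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())   # 2038-01-19T03:14:07+00:00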
Nov 25 10:02:28 compute-0 podman[271449]: 2025-11-25 10:02:28.869260993 +0000 UTC m=+0.105407618 container init 2c86fc7ae49062b2c17d46b4cdc5a398bd6d373674ec6924e8d33fe1ec5fe97a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:02:28 compute-0 podman[271449]: 2025-11-25 10:02:28.876757166 +0000 UTC m=+0.112903781 container start 2c86fc7ae49062b2c17d46b4cdc5a398bd6d373674ec6924e8d33fe1ec5fe97a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kepler, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 10:02:28 compute-0 podman[271449]: 2025-11-25 10:02:28.878269708 +0000 UTC m=+0.114416323 container attach 2c86fc7ae49062b2c17d46b4cdc5a398bd6d373674ec6924e8d33fe1ec5fe97a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kepler, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:02:28 compute-0 podman[271449]: 2025-11-25 10:02:28.786394766 +0000 UTC m=+0.022541371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:02:29 compute-0 sweet_kepler[271462]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:02:29 compute-0 sweet_kepler[271462]: --> All data devices are unavailable
Nov 25 10:02:29 compute-0 systemd[1]: libpod-2c86fc7ae49062b2c17d46b4cdc5a398bd6d373674ec6924e8d33fe1ec5fe97a.scope: Deactivated successfully.
Nov 25 10:02:29 compute-0 podman[271477]: 2025-11-25 10:02:29.233537155 +0000 UTC m=+0.024887114 container died 2c86fc7ae49062b2c17d46b4cdc5a398bd6d373674ec6924e8d33fe1ec5fe97a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kepler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 10:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e435f57d3331faf6f2f5a7b6d175813313059228b62cfdb14d19dee993edbc2-merged.mount: Deactivated successfully.
Nov 25 10:02:29 compute-0 podman[271477]: 2025-11-25 10:02:29.271438354 +0000 UTC m=+0.062788302 container remove 2c86fc7ae49062b2c17d46b4cdc5a398bd6d373674ec6924e8d33fe1ec5fe97a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 10:02:29 compute-0 systemd[1]: libpod-conmon-2c86fc7ae49062b2c17d46b4cdc5a398bd6d373674ec6924e8d33fe1ec5fe97a.scope: Deactivated successfully.
Nov 25 10:02:29 compute-0 sudo[271359]: pam_unix(sudo:session): session closed for user root
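[annotation] The "All data devices are unavailable" result from `lvm batch` is expected rather than a failure: /dev/ceph_vg0/ceph_lv0 already carries OSD data (the `lvm list` output further below shows it tagged as osd.1), so the batch run has nothing to create. A sketch of the same pre-check, reading the LV's Ceph tags via LVM's JSON reporting:

    import json
    import subprocess

    # LV path taken from the cephadm command line above.
    result = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,lv_tags",
         "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True,
    )
    for lv in json.loads(result.stdout)["report"][0]["lv"]:
        taken = "ceph.osd_id" in lv["lv_tags"]
        print(lv["lv_name"], "already an OSD" if taken else "available")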
Nov 25 10:02:29 compute-0 sudo[271490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:02:29 compute-0 sudo[271490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:29 compute-0 sudo[271490]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:29 compute-0 sudo[271515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:02:29 compute-0 sudo[271515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:29 compute-0 nova_compute[253512]: 2025-11-25 10:02:29.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:29 compute-0 nova_compute[253512]: 2025-11-25 10:02:29.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:02:29 compute-0 ceph-mon[74207]: pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 25 10:02:29 compute-0 podman[271572]: 2025-11-25 10:02:29.758724871 +0000 UTC m=+0.034209400 container create d3825c2d8963037baa7672b0f25fcb6f7d314984aa71392b9d5c49f73c151b4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_bouman, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 10:02:29 compute-0 systemd[1]: Started libpod-conmon-d3825c2d8963037baa7672b0f25fcb6f7d314984aa71392b9d5c49f73c151b4a.scope.
Nov 25 10:02:29 compute-0 nova_compute[253512]: 2025-11-25 10:02:29.815 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:02:29 compute-0 podman[271572]: 2025-11-25 10:02:29.827165079 +0000 UTC m=+0.102649608 container init d3825c2d8963037baa7672b0f25fcb6f7d314984aa71392b9d5c49f73c151b4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_bouman, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 10:02:29 compute-0 podman[271572]: 2025-11-25 10:02:29.839454871 +0000 UTC m=+0.114939390 container start d3825c2d8963037baa7672b0f25fcb6f7d314984aa71392b9d5c49f73c151b4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:02:29 compute-0 podman[271572]: 2025-11-25 10:02:29.746139132 +0000 UTC m=+0.021623671 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:02:29 compute-0 podman[271572]: 2025-11-25 10:02:29.842233039 +0000 UTC m=+0.117717558 container attach d3825c2d8963037baa7672b0f25fcb6f7d314984aa71392b9d5c49f73c151b4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_bouman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:02:29 compute-0 pensive_bouman[271585]: 167 167
Nov 25 10:02:29 compute-0 systemd[1]: libpod-d3825c2d8963037baa7672b0f25fcb6f7d314984aa71392b9d5c49f73c151b4a.scope: Deactivated successfully.
Nov 25 10:02:29 compute-0 podman[271572]: 2025-11-25 10:02:29.843954143 +0000 UTC m=+0.119438682 container died d3825c2d8963037baa7672b0f25fcb6f7d314984aa71392b9d5c49f73c151b4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_bouman, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d207208b61b925af462f7071c9565ca0b997c2126773444a4be3d22523dbf462-merged.mount: Deactivated successfully.
Nov 25 10:02:29 compute-0 podman[271572]: 2025-11-25 10:02:29.861645759 +0000 UTC m=+0.137130278 container remove d3825c2d8963037baa7672b0f25fcb6f7d314984aa71392b9d5c49f73c151b4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_bouman, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:02:29 compute-0 systemd[1]: libpod-conmon-d3825c2d8963037baa7672b0f25fcb6f7d314984aa71392b9d5c49f73c151b4a.scope: Deactivated successfully.
Nov 25 10:02:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:02:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:29 compute-0 podman[271607]: 2025-11-25 10:02:29.994054064 +0000 UTC m=+0.032237491 container create aa7bb44c0f5b8f72da6c798440218795ac060a5a029b33ff58224668b8c59bec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bohr, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 10:02:30 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 768 B/s rd, 0 op/s
Nov 25 10:02:30 compute-0 systemd[1]: Started libpod-conmon-aa7bb44c0f5b8f72da6c798440218795ac060a5a029b33ff58224668b8c59bec.scope.
Nov 25 10:02:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8621cab7849ebdbe97bafd5287dfb74b0be2a5a1d625819982ca1ae043e765cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8621cab7849ebdbe97bafd5287dfb74b0be2a5a1d625819982ca1ae043e765cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8621cab7849ebdbe97bafd5287dfb74b0be2a5a1d625819982ca1ae043e765cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8621cab7849ebdbe97bafd5287dfb74b0be2a5a1d625819982ca1ae043e765cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:30 compute-0 podman[271607]: 2025-11-25 10:02:30.065310281 +0000 UTC m=+0.103493719 container init aa7bb44c0f5b8f72da6c798440218795ac060a5a029b33ff58224668b8c59bec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bohr, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 10:02:30 compute-0 podman[271607]: 2025-11-25 10:02:30.071998612 +0000 UTC m=+0.110182040 container start aa7bb44c0f5b8f72da6c798440218795ac060a5a029b33ff58224668b8c59bec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 10:02:30 compute-0 podman[271607]: 2025-11-25 10:02:30.07327026 +0000 UTC m=+0.111453698 container attach aa7bb44c0f5b8f72da6c798440218795ac060a5a029b33ff58224668b8c59bec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:02:30 compute-0 podman[271607]: 2025-11-25 10:02:29.982040032 +0000 UTC m=+0.020223480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:02:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:30] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:02:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:30] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
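[annotation] The /metrics hits above are Prometheus scraping the mgr's prometheus module. The endpoint is plain HTTP and can be fetched directly; the port below is the module's default (9283), an assumption since the log lines do not show it:

    import urllib.request

    # Host from the mgr address in the audit lines; 9283 is the default
    # mgr prometheus port (assumption, not shown in the log).
    URL = "http://192.168.122.100:9283/metrics"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read().decode()

    # Smoke test: print just the cluster health metrics.
    for line in body.splitlines():
        if line.startswith("ceph_health"):
            print(line)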
Nov 25 10:02:30 compute-0 nifty_bohr[271620]: {
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:     "1": [
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:         {
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "devices": [
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "/dev/loop3"
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             ],
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "lv_name": "ceph_lv0",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "lv_size": "21470642176",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "name": "ceph_lv0",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "tags": {
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.cluster_name": "ceph",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.crush_device_class": "",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.encrypted": "0",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.osd_id": "1",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.type": "block",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.vdo": "0",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:                 "ceph.with_tpm": "0"
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             },
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "type": "block",
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:             "vg_name": "ceph_vg0"
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:         }
Nov 25 10:02:30 compute-0 nifty_bohr[271620]:     ]
Nov 25 10:02:30 compute-0 nifty_bohr[271620]: }
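[annotation] The JSON block above is the `lvm list --format json` payload: a map from OSD id to the logical volumes backing it, with the authoritative metadata in the lv_tags. A sketch reducing it to the essentials (assumes the payload was saved to lvm_list.json):

    import json

    # Saved output of: cephadm ceph-volume -- lvm list --format json
    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, volumes in osds.items():
        for vol in volumes:
            tags = vol["tags"]
            print(f"osd.{osd_id}: {vol['lv_path']} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")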
Nov 25 10:02:30 compute-0 systemd[1]: libpod-aa7bb44c0f5b8f72da6c798440218795ac060a5a029b33ff58224668b8c59bec.scope: Deactivated successfully.
Nov 25 10:02:30 compute-0 podman[271629]: 2025-11-25 10:02:30.383050703 +0000 UTC m=+0.019114970 container died aa7bb44c0f5b8f72da6c798440218795ac060a5a029b33ff58224668b8c59bec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bohr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8621cab7849ebdbe97bafd5287dfb74b0be2a5a1d625819982ca1ae043e765cb-merged.mount: Deactivated successfully.
Nov 25 10:02:30 compute-0 podman[271629]: 2025-11-25 10:02:30.407235001 +0000 UTC m=+0.043299258 container remove aa7bb44c0f5b8f72da6c798440218795ac060a5a029b33ff58224668b8c59bec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:02:30 compute-0 systemd[1]: libpod-conmon-aa7bb44c0f5b8f72da6c798440218795ac060a5a029b33ff58224668b8c59bec.scope: Deactivated successfully.
Nov 25 10:02:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:30.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:30 compute-0 sudo[271515]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:30 compute-0 nova_compute[253512]: 2025-11-25 10:02:30.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:30 compute-0 nova_compute[253512]: 2025-11-25 10:02:30.473 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:02:30 compute-0 nova_compute[253512]: 2025-11-25 10:02:30.473 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:02:30 compute-0 nova_compute[253512]: 2025-11-25 10:02:30.489 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:02:30 compute-0 sudo[271641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:02:30 compute-0 sudo[271641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:30 compute-0 sudo[271641]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:30 compute-0 sudo[271666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:02:30 compute-0 sudo[271666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:30.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:30 compute-0 podman[271721]: 2025-11-25 10:02:30.900154409 +0000 UTC m=+0.033226377 container create c99a28e40601880bb30ec8cbfca194eb91af5148ce423fc9cfaab5b6818cbad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_colden, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:02:30 compute-0 systemd[1]: Started libpod-conmon-c99a28e40601880bb30ec8cbfca194eb91af5148ce423fc9cfaab5b6818cbad5.scope.
Nov 25 10:02:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:02:30 compute-0 podman[271721]: 2025-11-25 10:02:30.961379713 +0000 UTC m=+0.094451701 container init c99a28e40601880bb30ec8cbfca194eb91af5148ce423fc9cfaab5b6818cbad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_colden, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 10:02:30 compute-0 podman[271721]: 2025-11-25 10:02:30.966287587 +0000 UTC m=+0.099359555 container start c99a28e40601880bb30ec8cbfca194eb91af5148ce423fc9cfaab5b6818cbad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:02:30 compute-0 podman[271721]: 2025-11-25 10:02:30.967449558 +0000 UTC m=+0.100521525 container attach c99a28e40601880bb30ec8cbfca194eb91af5148ce423fc9cfaab5b6818cbad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_colden, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:02:30 compute-0 vigilant_colden[271734]: 167 167
Nov 25 10:02:30 compute-0 systemd[1]: libpod-c99a28e40601880bb30ec8cbfca194eb91af5148ce423fc9cfaab5b6818cbad5.scope: Deactivated successfully.
Nov 25 10:02:30 compute-0 podman[271721]: 2025-11-25 10:02:30.972010246 +0000 UTC m=+0.105082224 container died c99a28e40601880bb30ec8cbfca194eb91af5148ce423fc9cfaab5b6818cbad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 10:02:30 compute-0 podman[271721]: 2025-11-25 10:02:30.887074698 +0000 UTC m=+0.020146696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-eadbcaeb65b1e5b08849ae4afbf10394f60d8048d500e4b23612f05a643af47a-merged.mount: Deactivated successfully.
Nov 25 10:02:31 compute-0 podman[271721]: 2025-11-25 10:02:31.000719526 +0000 UTC m=+0.133791504 container remove c99a28e40601880bb30ec8cbfca194eb91af5148ce423fc9cfaab5b6818cbad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_colden, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 10:02:31 compute-0 systemd[1]: libpod-conmon-c99a28e40601880bb30ec8cbfca194eb91af5148ce423fc9cfaab5b6818cbad5.scope: Deactivated successfully.
Nov 25 10:02:31 compute-0 podman[271740]: 2025-11-25 10:02:31.060474067 +0000 UTC m=+0.065921690 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
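[annotation] The health_status event above comes from podman's healthcheck timer for ovn_metadata_agent; the configured test, per the config_data in the same line, is the /openstack/healthcheck script mounted into the container. The same check can be triggered on demand:

    import subprocess

    # One-shot run of the container's configured healthcheck;
    # exit code 0 corresponds to health_status=healthy above.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")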
Nov 25 10:02:31 compute-0 podman[271772]: 2025-11-25 10:02:31.146681394 +0000 UTC m=+0.031039524 container create 13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 10:02:31 compute-0 systemd[1]: Started libpod-conmon-13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739.scope.
Nov 25 10:02:31 compute-0 nova_compute[253512]: 2025-11-25 10:02:31.187 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca4eaa02b0e6957879280aee2a5e28b832ecd7f0f1e15413a534b96d4f87ed6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca4eaa02b0e6957879280aee2a5e28b832ecd7f0f1e15413a534b96d4f87ed6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca4eaa02b0e6957879280aee2a5e28b832ecd7f0f1e15413a534b96d4f87ed6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca4eaa02b0e6957879280aee2a5e28b832ecd7f0f1e15413a534b96d4f87ed6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:02:31 compute-0 podman[271772]: 2025-11-25 10:02:31.217936559 +0000 UTC m=+0.102294699 container init 13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_carver, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Nov 25 10:02:31 compute-0 podman[271772]: 2025-11-25 10:02:31.223187089 +0000 UTC m=+0.107545208 container start 13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_carver, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:02:31 compute-0 podman[271772]: 2025-11-25 10:02:31.224593551 +0000 UTC m=+0.108951681 container attach 13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 10:02:31 compute-0 podman[271772]: 2025-11-25 10:02:31.135759902 +0000 UTC m=+0.020118033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:02:31 compute-0 ceph-mon[74207]: pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 768 B/s rd, 0 op/s
Nov 25 10:02:31 compute-0 stoic_carver[271785]: {}
Nov 25 10:02:31 compute-0 lvm[271861]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:02:31 compute-0 lvm[271861]: VG ceph_vg0 finished
Nov 25 10:02:31 compute-0 podman[271772]: 2025-11-25 10:02:31.864472407 +0000 UTC m=+0.748830537 container died 13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_carver, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Nov 25 10:02:31 compute-0 systemd[1]: libpod-13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739.scope: Deactivated successfully.
Nov 25 10:02:31 compute-0 systemd[1]: libpod-13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739.scope: Consumed 1.008s CPU time.
Nov 25 10:02:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-aca4eaa02b0e6957879280aee2a5e28b832ecd7f0f1e15413a534b96d4f87ed6-merged.mount: Deactivated successfully.
Nov 25 10:02:31 compute-0 podman[271772]: 2025-11-25 10:02:31.893085175 +0000 UTC m=+0.777443305 container remove 13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:02:31 compute-0 systemd[1]: libpod-conmon-13c11da56305b4e623cdf7336b647f0ffbaa553802dc8033c932741209dc8739.scope: Deactivated successfully.
Nov 25 10:02:31 compute-0 sudo[271666]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:02:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:02:31 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:32 compute-0 sudo[271875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:02:32 compute-0 sudo[271875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:32 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1 KiB/s rd, 1 op/s
Nov 25 10:02:32 compute-0 sudo[271875]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:32.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:32.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:32 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:32 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:02:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:33 compute-0 nova_compute[253512]: 2025-11-25 10:02:33.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:33 compute-0 ceph-mon[74207]: pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1 KiB/s rd, 1 op/s
Nov 25 10:02:34 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 768 B/s rd, 0 op/s
Nov 25 10:02:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:02:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:34.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:02:34 compute-0 nova_compute[253512]: 2025-11-25 10:02:34.468 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:02:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:34.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:34 compute-0 nova_compute[253512]: 2025-11-25 10:02:34.816 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:35 compute-0 ceph-mon[74207]: pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 768 B/s rd, 0 op/s
Nov 25 10:02:36 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 768 B/s rd, 0 op/s
Nov 25 10:02:36 compute-0 nova_compute[253512]: 2025-11-25 10:02:36.190 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:37 compute-0 ceph-mon[74207]: pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 768 B/s rd, 0 op/s
Nov 25 10:02:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:37.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:37.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:37.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:37.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:38 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1 KiB/s rd, 1 op/s
Nov 25 10:02:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:38.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:38.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:38.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:38.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:38.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:38.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:39 compute-0 podman[271906]: 2025-11-25 10:02:39.029966451 +0000 UTC m=+0.084639873 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:02:39 compute-0 ceph-mon[74207]: pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1 KiB/s rd, 1 op/s
Nov 25 10:02:39 compute-0 nova_compute[253512]: 2025-11-25 10:02:39.817 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:40 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:40] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:02:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:40] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:02:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:40.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:40.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:41 compute-0 ceph-mon[74207]: pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:41 compute-0 nova_compute[253512]: 2025-11-25 10:02:41.191 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:42 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:42.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:42.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:43 compute-0 ceph-mon[74207]: pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:43 compute-0 sudo[271934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:02:43 compute-0 sudo[271934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:02:43 compute-0 sudo[271934]: pam_unix(sudo:session): session closed for user root
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:44.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:44.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:44 compute-0 nova_compute[253512]: 2025-11-25 10:02:44.819 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:02:44
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', '.nfs', '.mgr', 'images']
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 10:02:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:02:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:44 compute-0 podman[271961]: 2025-11-25 10:02:44.993739895 +0000 UTC m=+0.051963854 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:02:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:02:45 compute-0 ceph-mon[74207]: pgmap v946: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:02:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:02:45 compute-0 sshd-session[271978]: Accepted publickey for zuul from 192.168.122.10 port 35658 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 10:02:45 compute-0 systemd-logind[744]: New session 56 of user zuul.
Nov 25 10:02:45 compute-0 systemd[1]: Started Session 56 of User zuul.
Nov 25 10:02:45 compute-0 sshd-session[271978]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:02:45 compute-0 sudo[271982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 10:02:45 compute-0 sudo[271982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:02:46 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:46 compute-0 nova_compute[253512]: 2025-11-25 10:02:46.193 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 10:02:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 10:02:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:47.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:47.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:47.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:47.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:47 compute-0 ceph-mon[74207]: pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:47 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26600 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:47 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26569 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:47 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.16992 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:47 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26609 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:47 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.16998 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:48 compute-0 ceph-mon[74207]: from='client.26600 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-0 ceph-mon[74207]: from='client.26569 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-0 ceph-mon[74207]: from='client.16992 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-0 ceph-mon[74207]: from='client.26609 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17004 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 10:02:48 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586734496' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:02:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:48.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:48.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:48.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:48.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:48.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:49 compute-0 ceph-mon[74207]: from='client.16998 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:49 compute-0 ceph-mon[74207]: pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:49 compute-0 ceph-mon[74207]: from='client.17004 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:49 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2037614428' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:02:49 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4229107572' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:02:49 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1586734496' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:02:49 compute-0 nova_compute[253512]: 2025-11-25 10:02:49.822 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:50 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:50] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:02:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:02:50] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:02:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:50.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:51 compute-0 ceph-mon[74207]: pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:51 compute-0 nova_compute[253512]: 2025-11-25 10:02:51.195 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:52.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:02:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:02:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:53 compute-0 ceph-mon[74207]: pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 10:02:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803642284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:02:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 10:02:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803642284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:02:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:54.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:54 compute-0 nova_compute[253512]: 2025-11-25 10:02:54.824 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:55 compute-0 ceph-mon[74207]: pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:55 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3803642284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:02:55 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/3803642284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:02:55 compute-0 ovs-vsctl[272344]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:02:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:02:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:56 compute-0 virtqemud[252911]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 10:02:56 compute-0 virtqemud[252911]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 10:02:56 compute-0 virtqemud[252911]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 10:02:56 compute-0 nova_compute[253512]: 2025-11-25 10:02:56.195 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:56 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26648 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:56 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: cache status {prefix=cache status} (starting...)
Nov 25 10:02:56 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:56 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26623 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:56.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:56 compute-0 lvm[272646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:02:56 compute-0 lvm[272646]: VG ceph_vg0 finished
Nov 25 10:02:56 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: client ls {prefix=client ls} (starting...)
Nov 25 10:02:56 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 10:02:56 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 10:02:56 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:56 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26663 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:56 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26644 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:57.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:57.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:57.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:57.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
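The dispatcher error and the three webhook retry warnings above share a single root cause: the dashboard receiver hostnames np0005534694-np0005534696.shiftstack do not resolve against the DNS server at 192.168.122.80, so every notification attempt dies in the TCP dial. A minimal check from the host, assuming the usual glibc and bind-utils tooling is installed:

    getent hosts np0005534694.shiftstack           # what the host's stub resolver sees
    dig @192.168.122.80 np0005534694.shiftstack    # ask the exact resolver alertmanager used

Until those names resolve, or the ceph-dashboard webhook URLs are changed, alertmanager will keep retrying and cancelling after 7 attempts, exactly as logged.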
Nov 25 10:02:57 compute-0 ceph-mon[74207]: pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:02:57 compute-0 ceph-mon[74207]: from='client.26648 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mon[74207]: from='client.26623 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/268889035' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mon[74207]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2345564020' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mon[74207]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mon[74207]: from='client.26663 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26690 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26684 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26680 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 10:02:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2452178801' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17088 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26720 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26729 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:02:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958389917' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:57 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26722 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 25 10:02:57 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408653654' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 10:02:57 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:02:58 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 10:02:58 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:58 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26743 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.26644 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2036901220' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2019448717' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.26690 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.26684 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.26680 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2452178801' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3117861905' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3077889400' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.17088 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.26720 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.26729 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/958389917' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1589163821' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1237955398' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.26722 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/408653654' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3224074830' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2913515115' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26777 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17130 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: ops {prefix=ops} (starting...)
Nov 25 10:02:58 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:58.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:58 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26770 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26801 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 25 10:02:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2277747532' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:02:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:02:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:02:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:58.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:02:58 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26819 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 25 10:02:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:58.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:58.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:58.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:02:58.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:02:58 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: session ls {prefix=session ls} (starting...)
Nov 25 10:02:58 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:02:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 25 10:02:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3290073188' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 10:02:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117177134' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:02:58 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: status {prefix=status} (starting...)
Nov 25 10:02:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 10:02:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1076590568' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26852 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.26743 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.26777 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.17130 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/932111885' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3661723052' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1421500948' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.26770 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.26801 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2277747532' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/763932686' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1910110655' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.26819 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2480608717' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3290073188' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/117177134' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1076590568' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2868420368' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 10:02:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737613877' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26885 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T10:02:59.512+0000 7f5ef14f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:02:59 compute-0 ceph-mgr[74476]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:02:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 25 10:02:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977904971' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26851 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:02:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T10:02:59.556+0000 7f5ef14f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:02:59 compute-0 ceph-mgr[74476]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
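Both (95) Operation not supported replies above are the mgr refusing the insights command while that module is disabled; the log message itself states the remedy. If insights is actually wanted, enabling and verifying it would be (the enable command is quoted verbatim from the log line):

    ceph mgr module enable insights
    ceph mgr module ls    # insights should now appear among the enabled modules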
Nov 25 10:02:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 10:02:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156290557' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:02:59 compute-0 nova_compute[253512]: 2025-11-25 10:02:59.826 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:02:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 25 10:02:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2821648642' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 25 10:02:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1558713057' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:02:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:02:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.26852 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/288058564' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3610283719' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2737613877' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2077170816' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2256568961' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.26885 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1977904971' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.26851 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2156290557' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/356955100' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4142727159' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2821648642' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1558713057' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2582059050' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1945648626' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3469210907' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3041939863' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17253 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T10:03:00.210+0000 7f5ef14f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:03:00 compute-0 ceph-mgr[74476]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:03:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:00] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:03:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:00] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:03:00 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26917 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:00.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:00 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 25 10:03:00 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3965772973' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26923 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 25 10:03:00 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663197137' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:03:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:00.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:00 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26941 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:00 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 25 10:03:01 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4172689874' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 10:03:01 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2005351374' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.26971 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27011 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17316 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.17253 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1131750807' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/736313694' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.26917 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3965772973' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.26923 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2663197137' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/648257444' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/430701055' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.26941 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.26950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/816679850' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/216371832' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4172689874' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2005351374' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:03:01 compute-0 nova_compute[253512]: 2025-11-25 10:03:01.196 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 10:03:01 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3838890863' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 10:03:01 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754728243' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17340 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17346 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27050 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:01 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27074 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:02 compute-0 podman[273675]: 2025-11-25 10:03:02.01956623 +0000 UTC m=+0.078228185 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17382 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.26971 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.27011 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.17316 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3838890863' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.17337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1754728243' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.17340 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2891068716' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.17346 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.27050 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2721465140' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.17370 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3483676717' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2140606591' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: from='client.27074 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27083 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27104 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 10:03:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/155015122' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:02.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17406 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27131 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:02.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17427 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27100 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:02 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17439 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17457 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27133 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.17382 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/193731057' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.27083 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3671190065' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.27104 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/155015122' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.17406 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2366599518' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.27119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.27131 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3398466942' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/33019584' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.17427 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.27100 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.17439 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3722671052' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27179 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 25 10:03:03 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732606932' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 25 10:03:03 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2380605087' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:03:03 compute-0 sudo[273978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:03:03 compute-0 sudo[273978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:03 compute-0 sudo[273978]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17481 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27151 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27200 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17505 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 25 10:03:03 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4290597205' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17517 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27227 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:04 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17529 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 25 10:03:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26241685' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.17457 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.27133 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.27179 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2732606932' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2380605087' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.17481 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3116723934' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.27151 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.27200 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2900811919' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.17505 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4290597205' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.17517 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.27227 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4041057590' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/26241685' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/774446042' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17547 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 25 10:03:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003311665' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:03:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:04.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:04.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 25 10:03:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310589035' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:03:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 25 10:03:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3093908409' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:03:04 compute-0 nova_compute[253512]: 2025-11-25 10:03:04.827 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997002602s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.493438721s@ mbc={}] exit Reset 0.000053 1 0.000081
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997002602s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.493438721s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997002602s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.493438721s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997002602s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.493438721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997002602s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.493438721s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=14.997002602s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.493438721s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000761 2 0.000034
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=-1 lpr=77 pi=[51,77)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008512 3 0.000023
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=-1 lpr=77 pi=[51,77)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.008544 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=-1 lpr=77 pi=[51,77)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000031 1 0.000057
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 2 0.000027
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=77/78 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=77/78 n=1 ec=49/17 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001197 4 0.000086
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=77/78 n=1 ec=49/17 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=77/78 n=1 ec=49/17 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=77/78 n=1 ec=49/17 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 6422528 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:57.303415+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:26.329133+0000 osd.1 (osd.1) 74 : cluster [DBG] 8.1e deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:26.339863+0000 osd.1 (osd.1) 75 : cluster [DBG] 8.1e deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003592 3 0.000059
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004716 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 75)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:26.329133+0000 osd.1 (osd.1) 74 : cluster [DBG] 8.1e deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:26.339863+0000 osd.1 (osd.1) 75 : cluster [DBG] 8.1e deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003905 3 0.000069
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004810 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.003423 5 0.000879
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000092 1 0.000056
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009916 7 0.000061
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009273 7 0.000070
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.004196 5 0.001057
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001477 1 0.000044
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.035518 1 0.000143
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035497 2 0.000097
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000445 1 0.000027
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052354 2 0.000079
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.088426 1 0.000103
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.088682 1 0.000300
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 6356992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 DELETING pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044689 2 0.000131
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.133190 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.142566 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 DELETING pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096343 2 0.000209
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.185154 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.195100 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:58.303541+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:27.348155+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.16 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:27.358284+0000 osd.1 (osd.1) 77 : cluster [DBG] 4.16 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcb28000/0x0/0x4ffc00000, data 0x8420a/0xf3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 77)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:27.348155+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.16 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:27.358284+0000 osd.1 (osd.1) 77 : cluster [DBG] 4.16 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.004560 1 0.000175
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.045590 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.050347 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.050376 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.957972527s) [2] async=[2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 255.504089355s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.957801819s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.504089355s@ mbc={}] exit Reset 0.000212 1 0.000289
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.957801819s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.504089355s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.957801819s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.504089355s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.957801819s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.504089355s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.957801819s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.504089355s@ mbc={}] exit Start 0.000188 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.952252 1 0.000209
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.045177 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.050016 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.050039 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.957801819s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.504089355s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.958954811s) [2] async=[2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 255.505615234s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.958662987s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.505615234s@ mbc={}] exit Reset 0.000313 1 0.000356
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.958662987s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.505615234s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.958662987s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.505615234s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.958662987s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.505615234s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.958662987s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.505615234s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80 pruub=14.958662987s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.505615234s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 80 handle_osd_map epochs [80,80], i have 80, src has [1,80]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 6356992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 705472 data_alloc: 218103808 data_used: 155648
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:35:59.303678+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:28.350260+0000 osd.1 (osd.1) 78 : cluster [DBG] 8.1a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:28.360875+0000 osd.1 (osd.1) 79 : cluster [DBG] 8.1a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 79)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:28.350260+0000 osd.1 (osd.1) 78 : cluster [DBG] 8.1a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:28.360875+0000 osd.1 (osd.1) 79 : cluster [DBG] 8.1a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 6356992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:00.303829+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:29.369500+0000 osd.1 (osd.1) 80 : cluster [DBG] 11.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:29.380100+0000 osd.1 (osd.1) 81 : cluster [DBG] 11.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.568823 6 0.000130
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.569290 6 0.000418
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000598 2 0.000299
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000745 2 0.000359
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 DELETING pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.059975 2 0.000213
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060692 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.630535 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 DELETING pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111685 2 0.000086
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112501 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=-1 lpr=80 pi=[51,80)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.681582 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 81)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:29.369500+0000 osd.1 (osd.1) 80 : cluster [DBG] 11.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:29.380100+0000 osd.1 (osd.1) 81 : cluster [DBG] 11.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 6348800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:01.303945+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:30.376731+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.17 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:30.402156+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.17 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 83)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:30.376731+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.17 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:30.402156+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.17 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 6299648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 81 heartbeat osd_stat(store_statfs(0x4fcb25000/0x0/0x4ffc00000, data 0x8802e/0xf6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.826044 92 0.000228
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.828830 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 37.828872 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 37.828904 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.173921585s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 254.001953125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.173894882s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.001953125s@ mbc={}] exit Reset 0.000055 1 0.000102
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.173894882s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.001953125s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.173894882s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.001953125s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.173894882s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.001953125s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.173894882s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.001953125s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.173894882s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.001953125s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.828169 92 0.000434
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.832754 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 37.832810 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 37.832839 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.171956062s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 254.000564575s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.171936989s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.000564575s@ mbc={}] exit Reset 0.000038 1 0.000086
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.171936989s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.000564575s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.171936989s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.000564575s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.171936989s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.000564575s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.171936989s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.000564575s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 82 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82 pruub=10.171936989s) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.000564575s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:02.304220+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:31.409524+0000 osd.1 (osd.1) 84 : cluster [DBG] 5.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:31.420041+0000 osd.1 (osd.1) 85 : cluster [DBG] 5.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.606729 3 0.000028
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.607301 3 0.000031
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.606758 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.607325 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=82) [0] r=-1 lpr=82 pi=[51,82)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000054 1 0.000081
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000036
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000103 1 0.000118
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000032
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000179 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 83 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 85)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:31.409524+0000 osd.1 (osd.1) 84 : cluster [DBG] 5.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:31.420041+0000 osd.1 (osd.1) 85 : cluster [DBG] 5.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 6275072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:03.304353+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:32.377563+0000 osd.1 (osd.1) 86 : cluster [DBG] 3.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:32.388077+0000 osd.1 (osd.1) 87 : cluster [DBG] 3.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002675 4 0.000204
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002937 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003375 4 0.000069
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003500 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 87)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:32.377563+0000 osd.1 (osd.1) 86 : cluster [DBG] 3.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:32.388077+0000 osd.1 (osd.1) 87 : cluster [DBG] 3.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 6234112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 700316 data_alloc: 218103808 data_used: 143360
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b(unlocked)] enter Initial
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=0 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=0 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000020
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000145 1 0.000039
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.322762 5 0.000365
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000049 1 0.000060
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.322605 5 0.000221
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000789 2 0.000035
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000292 1 0.000021
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066896 2 0.000060
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.067037 1 0.000022
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000374 1 0.000036
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031306 2 0.000040
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 84 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:04.305885+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:33.346849+0000 osd.1 (osd.1) 88 : cluster [DBG] 3.19 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:33.357294+0000 osd.1 (osd.1) 89 : cluster [DBG] 3.19 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 89)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:33.346849+0000 osd.1 (osd.1) 88 : cluster [DBG] 3.19 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:33.357294+0000 osd.1 (osd.1) 89 : cluster [DBG] 3.19 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.755178 1 0.000121
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.145475 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.148452 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.148507 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.177088737s) [0] async=[0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 261.761322021s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.822537 2 0.000088
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.823513 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176831245s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761322021s@ mbc={}] exit Reset 0.000344 1 0.000555
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176831245s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761322021s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176831245s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761322021s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176831245s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761322021s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176831245s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761322021s@ mbc={}] exit Start 0.000095 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176831245s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761322021s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.724340 1 0.000048
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.145797 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.149340 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.149359 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[51,83)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176769257s) [0] async=[0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 261.761627197s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176725388s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761627197s@ mbc={}] exit Reset 0.000067 1 0.000097
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176725388s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761627197s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176725388s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761627197s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176725388s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761627197s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176725388s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761627197s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85 pruub=15.176725388s) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.761627197s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.000855 3 0.000121
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000053 1 0.000050
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000007 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007514 3 0.000038
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 85 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=84/85 n=1 ec=49/17 lis/c=84/61 les/c/f=85/62/0 sis=84) [1] r=0 lpr=84 pi=[61,84)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 6217728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:05.306034+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:34.345626+0000 osd.1 (osd.1) 90 : cluster [DBG] 10.13 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:34.356214+0000 osd.1 (osd.1) 91 : cluster [DBG] 10.13 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 91)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:34.345626+0000 osd.1 (osd.1) 90 : cluster [DBG] 10.13 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:34.356214+0000 osd.1 (osd.1) 91 : cluster [DBG] 10.13 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 6217728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:06.306162+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:35.324731+0000 osd.1 (osd.1) 92 : cluster [DBG] 7.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:35.335310+0000 osd.1 (osd.1) 93 : cluster [DBG] 7.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.979542732s of 10.036883354s, submitted: 80
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.519800 6 0.000057
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.520019 6 0.000329
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000098 1 0.000068
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000164 1 0.000084
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 DELETING pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.049678 3 0.000136
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.049820 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.1a( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=83/84 n=5 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.569666 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 DELETING pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.116446 3 0.000563
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.116663 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 86 pg[9.a( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=83/84 n=6 ec=51/29 lis/c=83/51 les/c/f=84/52/0 sis=85) [0] r=-1 lpr=85 pi=[51,85)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.636852 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 86 heartbeat osd_stat(store_statfs(0x4fcb18000/0x0/0x4ffc00000, data 0x90548/0x102000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 93)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:35.324731+0000 osd.1 (osd.1) 92 : cluster [DBG] 7.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:35.335310+0000 osd.1 (osd.1) 93 : cluster [DBG] 7.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 6168576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:07.306325+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:36.367061+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.1d scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:36.488034+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.1d scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 95)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:36.367061+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.1d scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:36.488034+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.1d scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 6160384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:08.306499+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:37.339014+0000 osd.1 (osd.1) 96 : cluster [DBG] 7.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:37.349605+0000 osd.1 (osd.1) 97 : cluster [DBG] 7.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 97)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:37.339014+0000 osd.1 (osd.1) 96 : cluster [DBG] 7.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:37.349605+0000 osd.1 (osd.1) 97 : cluster [DBG] 7.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 6152192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 699258 data_alloc: 218103808 data_used: 135168
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:09.306655+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 4 last_log 101 sent 97 num 4 unsent 4 sending 4
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:38.329844+0000 osd.1 (osd.1) 98 : cluster [DBG] 12.12 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:38.343990+0000 osd.1 (osd.1) 99 : cluster [DBG] 12.12 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:39.284236+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:39.294776+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 101)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:38.329844+0000 osd.1 (osd.1) 98 : cluster [DBG] 12.12 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:38.343990+0000 osd.1 (osd.1) 99 : cluster [DBG] 12.12 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:39.284236+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:39.294776+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 6152192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:10.306805+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:40.256142+0000 osd.1 (osd.1) 102 : cluster [DBG] 7.b deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:40.266714+0000 osd.1 (osd.1) 103 : cluster [DBG] 7.b deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 103)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:40.256142+0000 osd.1 (osd.1) 102 : cluster [DBG] 7.b deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:40.266714+0000 osd.1 (osd.1) 103 : cluster [DBG] 7.b deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 6152192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:11.306999+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:41.214757+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:41.225332+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 88 handle_osd_map epochs [89,90], i have 88, src has [1,90]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 89 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=42'42 mlcod 42'42 active+clean] exit Started/Primary/Active/Clean 24.915418 58 0.000415
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 89 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary/Active 24.985199 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 89 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary 25.990229 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 89 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started 25.990351 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 89 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=42'42 mlcod 42'42 active mbc={255={}}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 89 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89 pruub=15.017866135s) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 42'42 active pruub 268.177459717s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 90 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89 pruub=15.017827034s) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 268.177459717s@ mbc={}] exit Reset 0.000070 2 0.000133
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 90 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89 pruub=15.017827034s) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 268.177459717s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 90 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89 pruub=15.017827034s) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 268.177459717s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 90 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89 pruub=15.017827034s) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 268.177459717s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 90 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89 pruub=15.017827034s) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 268.177459717s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 90 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89 pruub=15.017827034s) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 268.177459717s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 90 handle_osd_map epochs [87,90], i have 90, src has [1,90]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 90 heartbeat osd_stat(store_statfs(0x4fcb13000/0x0/0x4ffc00000, data 0x96988/0x109000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 105)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:41.214757+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:41.225332+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.488129 6 0.000067
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 crt=42'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 6094848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006999 3 0.000198
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.007128 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000056 1 0.000100
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] lb MIN local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 DELETING pi=[69,89)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008666 2 0.000124
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] lb MIN local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008780 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 91 pg[6.e( v 42'42 (0'0,42'42] lb MIN local-lis/les=69/70 n=1 ec=49/17 lis/c=69/69 les/c/f=70/70/0 sis=89) [0] r=-1 lpr=89 pi=[69,89)/1 luod=0'0 crt=42'42 mlcod 0'0 active mbc={}] exit Started 0.504151 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:12.307270+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:42.222214+0000 osd.1 (osd.1) 106 : cluster [DBG] 7.4 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:42.232794+0000 osd.1 (osd.1) 107 : cluster [DBG] 7.4 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 107)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:42.222214+0000 osd.1 (osd.1) 106 : cluster [DBG] 7.4 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:42.232794+0000 osd.1 (osd.1) 107 : cluster [DBG] 7.4 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 6070272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:13.307513+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:43.184592+0000 osd.1 (osd.1) 108 : cluster [DBG] 3.1 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:43.195130+0000 osd.1 (osd.1) 109 : cluster [DBG] 3.1 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 109)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:43.184592+0000 osd.1 (osd.1) 108 : cluster [DBG] 3.1 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:43.195130+0000 osd.1 (osd.1) 109 : cluster [DBG] 3.1 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 6070272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 708955 data_alloc: 218103808 data_used: 147456
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.e deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.e deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:14.307717+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:44.234505+0000 osd.1 (osd.1) 110 : cluster [DBG] 12.e deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:44.246651+0000 osd.1 (osd.1) 111 : cluster [DBG] 12.e deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fcb0a000/0x0/0x4ffc00000, data 0x9cbb7/0x112000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 111)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:44.234505+0000 osd.1 (osd.1) 110 : cluster [DBG] 12.e deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:44.246651+0000 osd.1 (osd.1) 111 : cluster [DBG] 12.e deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 5013504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:15.307943+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:45.215026+0000 osd.1 (osd.1) 112 : cluster [DBG] 10.8 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:45.225618+0000 osd.1 (osd.1) 113 : cluster [DBG] 10.8 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 113)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:45.215026+0000 osd.1 (osd.1) 112 : cluster [DBG] 10.8 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:45.225618+0000 osd.1 (osd.1) 113 : cluster [DBG] 10.8 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 6062080 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:16.308169+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:46.194500+0000 osd.1 (osd.1) 114 : cluster [DBG] 7.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:46.205315+0000 osd.1 (osd.1) 115 : cluster [DBG] 7.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 6053888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 115)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:46.194500+0000 osd.1 (osd.1) 114 : cluster [DBG] 7.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:46.205315+0000 osd.1 (osd.1) 115 : cluster [DBG] 7.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.788957596s of 10.835045815s, submitted: 43
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:17.308383+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:47.200912+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.5 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:47.211506+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.5 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 6053888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 117)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:47.200912+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.5 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:47.211506+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.5 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f(unlocked)] enter Initial
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=0 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=0 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000031
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000139 1 0.000044
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001868 2 0.000157
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 92 pg[6.f( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.c deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.c deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:18.308583+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:48.152056+0000 osd.1 (osd.1) 118 : cluster [DBG] 12.c deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:48.162685+0000 osd.1 (osd.1) 119 : cluster [DBG] 12.c deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 6037504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718629 data_alloc: 218103808 data_used: 147456
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 92 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996253 2 0.000090
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.998346 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=61/62 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=61/61 les/c/f=62/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 119)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:48.152056+0000 osd.1 (osd.1) 118 : cluster [DBG] 12.c deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:48.162685+0000 osd.1 (osd.1) 119 : cluster [DBG] 12.c deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.002513 5 0.001003
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000090 1 0.000089
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000016 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126785 1 0.000088
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 93 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=92/93 n=1 ec=49/17 lis/c=92/61 les/c/f=93/62/0 sis=92) [1] r=0 lpr=92 pi=[61,92)/1 crt=42'42 mlcod 42'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fcb06000/0x0/0x4ffc00000, data 0x9ee3e/0x115000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:19.308814+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:49.184742+0000 osd.1 (osd.1) 120 : cluster [DBG] 3.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:49.195324+0000 osd.1 (osd.1) 121 : cluster [DBG] 3.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 4939776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 121)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:49.184742+0000 osd.1 (osd.1) 120 : cluster [DBG] 3.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:49.195324+0000 osd.1 (osd.1) 121 : cluster [DBG] 3.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 55.835320 128 0.000464
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 55.837576 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 55.837700 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 55.837863 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94 pruub=8.164560318s) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 270.002288818s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94 pruub=8.164503098s) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.002288818s@ mbc={}] exit Reset 0.000127 1 0.000339
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94 pruub=8.164503098s) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.002288818s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94 pruub=8.164503098s) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.002288818s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94 pruub=8.164503098s) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.002288818s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94 pruub=8.164503098s) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.002288818s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 94 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94 pruub=8.164503098s) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.002288818s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:20.308941+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:50.166409+0000 osd.1 (osd.1) 122 : cluster [DBG] 7.2 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:50.176983+0000 osd.1 (osd.1) 123 : cluster [DBG] 7.2 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 4923392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 123)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:50.166409+0000 osd.1 (osd.1) 122 : cluster [DBG] 7.2 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:50.176983+0000 osd.1 (osd.1) 123 : cluster [DBG] 7.2 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.862900 3 0.000044
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.862949 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=94) [0] r=-1 lpr=94 pi=[51,94)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000130 1 0.000174
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000065
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000038 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 95 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:21.309138+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:51.164403+0000 osd.1 (osd.1) 124 : cluster [DBG] 7.e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:51.175042+0000 osd.1 (osd.1) 125 : cluster [DBG] 7.e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 4866048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999461 4 0.000102
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999645 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 125)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:51.164403+0000 osd.1 (osd.1) 124 : cluster [DBG] 7.e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:51.175042+0000 osd.1 (osd.1) 125 : cluster [DBG] 7.e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.003072 5 0.000506
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000096 1 0.000063
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000383 1 0.000055
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014217 2 0.000081
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 96 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fcafb000/0x0/0x4ffc00000, data 0xa51de/0x120000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:22.309512+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:52.192726+0000 osd.1 (osd.1) 126 : cluster [DBG] 10.2 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:52.203340+0000 osd.1 (osd.1) 127 : cluster [DBG] 10.2 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.719934 1 0.000095
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.738041 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.737759 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.737795 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=95) [0]/[1] async=[0] r=0 lpr=95 pi=[51,95)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97 pruub=15.264823914s) [0] async=[0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 279.703552246s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97 pruub=15.264730453s) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 279.703552246s@ mbc={}] exit Reset 0.000148 1 0.000231
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97 pruub=15.264730453s) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 279.703552246s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97 pruub=15.264730453s) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 279.703552246s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97 pruub=15.264730453s) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 279.703552246s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97 pruub=15.264730453s) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 279.703552246s@ mbc={}] exit Start 0.000049 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 97 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97 pruub=15.264730453s) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 279.703552246s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 4857856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 127)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:52.192726+0000 osd.1 (osd.1) 126 : cluster [DBG] 10.2 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:52.203340+0000 osd.1 (osd.1) 127 : cluster [DBG] 10.2 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:23.309673+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:53.153674+0000 osd.1 (osd.1) 128 : cluster [DBG] 7.f scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:53.164107+0000 osd.1 (osd.1) 129 : cluster [DBG] 7.f scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 4849664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744582 data_alloc: 218103808 data_used: 151552
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 129)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:53.153674+0000 osd.1 (osd.1) 128 : cluster [DBG] 7.f scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:53.164107+0000 osd.1 (osd.1) 129 : cluster [DBG] 7.f scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.487924 6 0.000169
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001259 2 0.000173
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=-1 lpr=97 DELETING pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029260 2 0.000089
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030552 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 98 pg[9.10( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=95/96 n=6 ec=51/29 lis/c=95/51 les/c/f=96/52/0 sis=97) [0] r=-1 lpr=97 pi=[51,97)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.518582 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:24.309828+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:54.187810+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.3 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:54.205677+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.3 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 4849664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 131)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:54.187810+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.3 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:54.205677+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.3 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:25.310016+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:55.140772+0000 osd.1 (osd.1) 132 : cluster [DBG] 2.4 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:55.151411+0000 osd.1 (osd.1) 133 : cluster [DBG] 2.4 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 4833280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 133)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:55.140772+0000 osd.1 (osd.1) 132 : cluster [DBG] 2.4 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:55.151411+0000 osd.1 (osd.1) 133 : cluster [DBG] 2.4 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:26.310188+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:56.143203+0000 osd.1 (osd.1) 134 : cluster [DBG] 12.8 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:56.153564+0000 osd.1 (osd.1) 135 : cluster [DBG] 12.8 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 4833280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 135)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:56.143203+0000 osd.1 (osd.1) 134 : cluster [DBG] 12.8 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:56.153564+0000 osd.1 (osd.1) 135 : cluster [DBG] 12.8 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:27.310327+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:57.156380+0000 osd.1 (osd.1) 136 : cluster [DBG] 2.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:57.166934+0000 osd.1 (osd.1) 137 : cluster [DBG] 2.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 98 heartbeat osd_stat(store_statfs(0x4fcaf2000/0x0/0x4ffc00000, data 0xab0fa/0x129000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 4800512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 137)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:57.156380+0000 osd.1 (osd.1) 136 : cluster [DBG] 2.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:57.166934+0000 osd.1 (osd.1) 137 : cluster [DBG] 2.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.909895897s of 10.964529991s, submitted: 63
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:28.310493+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:58.166068+0000 osd.1 (osd.1) 138 : cluster [DBG] 7.3 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:58.176596+0000 osd.1 (osd.1) 139 : cluster [DBG] 7.3 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 4800512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746484 data_alloc: 218103808 data_used: 151552
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 139)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:58.166068+0000 osd.1 (osd.1) 138 : cluster [DBG] 7.3 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:58.176596+0000 osd.1 (osd.1) 139 : cluster [DBG] 7.3 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:29.310666+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:59.155836+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:36:59.166414+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 65.635917 145 0.001413
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 65.638565 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 65.638651 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 65.638789 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99 pruub=14.363901138s) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 286.002502441s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99 pruub=14.363644600s) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.002502441s@ mbc={}] exit Reset 0.000333 1 0.000886
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99 pruub=14.363644600s) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.002502441s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99 pruub=14.363644600s) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.002502441s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99 pruub=14.363644600s) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.002502441s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99 pruub=14.363644600s) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.002502441s@ mbc={}] exit Start 0.000094 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 99 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99 pruub=14.363644600s) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.002502441s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 4792320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 141)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:59.155836+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:36:59.166414+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.125286 3 0.000455
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.125464 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 65.760019 148 0.001783
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 65.764974 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 65.765016 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 65.765043 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=51) [1] r=0 lpr=51 crt=42'1151 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100 pruub=14.240377426s) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 286.005401611s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100 pruub=14.240349770s) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.005401611s@ mbc={}] exit Reset 0.000065 1 0.000302
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100 pruub=14.240349770s) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.005401611s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100 pruub=14.240349770s) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.005401611s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100 pruub=14.240349770s) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.005401611s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100 pruub=14.240349770s) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.005401611s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100 pruub=14.240349770s) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.005401611s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=99) [0] r=-1 lpr=99 pi=[51,99)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.001066 1 0.001403
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000094 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000456
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 100 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:30.310848+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:00.163764+0000 osd.1 (osd.1) 142 : cluster [DBG] 7.9 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:00.174380+0000 osd.1 (osd.1) 143 : cluster [DBG] 7.9 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fcaec000/0x0/0x4ffc00000, data 0xaf30c/0x12f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 4784128 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 143)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:00.163764+0000 osd.1 (osd.1) 142 : cluster [DBG] 7.9 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:00.174380+0000 osd.1 (osd.1) 143 : cluster [DBG] 7.9 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004607 4 0.000241
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004960 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006362 3 0.000033
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.006584 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0] r=-1 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000140 1 0.001127
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000050 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000192 1 0.000348
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.001895 5 0.001313
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000045 1 0.000055
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000354 1 0.000070
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035442 2 0.000046
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 101 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:31.310982+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:01.177575+0000 osd.1 (osd.1) 144 : cluster [DBG] 5.a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:01.188173+0000 osd.1 (osd.1) 145 : cluster [DBG] 5.a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 4767744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 145)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:01.177575+0000 osd.1 (osd.1) 144 : cluster [DBG] 5.a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:01.188173+0000 osd.1 (osd.1) 145 : cluster [DBG] 5.a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007505 4 0.000119
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007830 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=51/52 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.971305 1 0.000097
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009310 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.014296 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.014552 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=100) [0]/[1] async=[0] r=0 lpr=100 pi=[51,100)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102 pruub=14.992504120s) [0] async=[0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 288.773162842s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102 pruub=14.992448807s) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 288.773162842s@ mbc={}] exit Reset 0.000087 1 0.000124
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102 pruub=14.992448807s) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 288.773162842s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102 pruub=14.992448807s) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 288.773162842s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102 pruub=14.992448807s) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 288.773162842s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102 pruub=14.992448807s) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 288.773162842s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102 pruub=14.992448807s) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 288.773162842s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.005407 5 0.000272
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000168 1 0.000139
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000342 1 0.000145
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028354 2 0.000057
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 102 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:32.311101+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:02.211835+0000 osd.1 (osd.1) 146 : cluster [DBG] 10.5 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:02.222377+0000 osd.1 (osd.1) 147 : cluster [DBG] 10.5 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.625821 1 0.000099
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.660321 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.668178 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.668292 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[51,101)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103 pruub=15.344932556s) [0] async=[0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 active pruub 289.785827637s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103 pruub=15.344882011s) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.785827637s@ mbc={}] exit Reset 0.000086 1 0.000131
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103 pruub=15.344882011s) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.785827637s@ mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103 pruub=15.344882011s) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.785827637s@ mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103 pruub=15.344882011s) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.785827637s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103 pruub=15.344882011s) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.785827637s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103 pruub=15.344882011s) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.785827637s@ mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.664418 7 0.000151
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000071 1 0.000037
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=-1 lpr=102 DELETING pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038589 2 0.000206
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038729 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 103 pg[9.11( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=100/101 n=6 ec=51/29 lis/c=100/51 les/c/f=101/52/0 sis=102) [0] r=-1 lpr=102 pi=[51,102)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.703259 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 4743168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 147)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:02.211835+0000 osd.1 (osd.1) 146 : cluster [DBG] 10.5 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:02.222377+0000 osd.1 (osd.1) 147 : cluster [DBG] 10.5 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:33.311269+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:03.185236+0000 osd.1 (osd.1) 148 : cluster [DBG] 7.8 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:03.195855+0000 osd.1 (osd.1) 149 : cluster [DBG] 7.8 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008601 7 0.000089
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000057 1 0.000060
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=-1 lpr=103 DELETING pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030647 2 0.000145
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030749 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 104 pg[9.12( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=101/102 n=6 ec=51/29 lis/c=101/51 les/c/f=102/52/0 sis=103) [0] r=-1 lpr=103 pi=[51,103)/1 crt=42'1151 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.039399 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 4677632 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 753880 data_alloc: 218103808 data_used: 147456
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 149)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:03.185236+0000 osd.1 (osd.1) 148 : cluster [DBG] 7.8 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:03.195855+0000 osd.1 (osd.1) 149 : cluster [DBG] 7.8 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fcae2000/0x0/0x4ffc00000, data 0xb7156/0x139000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:34.311416+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:04.171854+0000 osd.1 (osd.1) 150 : cluster [DBG] 3.17 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:04.182374+0000 osd.1 (osd.1) 151 : cluster [DBG] 3.17 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 151)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:04.171854+0000 osd.1 (osd.1) 150 : cluster [DBG] 3.17 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:04.182374+0000 osd.1 (osd.1) 151 : cluster [DBG] 3.17 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:35.311536+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:05.192019+0000 osd.1 (osd.1) 152 : cluster [DBG] 7.13 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:05.202463+0000 osd.1 (osd.1) 153 : cluster [DBG] 7.13 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fcae2000/0x0/0x4ffc00000, data 0xb7156/0x139000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 153)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:05.192019+0000 osd.1 (osd.1) 152 : cluster [DBG] 7.13 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:05.202463+0000 osd.1 (osd.1) 153 : cluster [DBG] 7.13 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:36.311687+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:06.166076+0000 osd.1 (osd.1) 154 : cluster [DBG] 12.19 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:06.176676+0000 osd.1 (osd.1) 155 : cluster [DBG] 12.19 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fcae2000/0x0/0x4ffc00000, data 0xb7156/0x139000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 155)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:06.166076+0000 osd.1 (osd.1) 154 : cluster [DBG] 12.19 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:06.176676+0000 osd.1 (osd.1) 155 : cluster [DBG] 12.19 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:37.311879+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:07.202089+0000 osd.1 (osd.1) 156 : cluster [DBG] 12.1c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:07.212682+0000 osd.1 (osd.1) 157 : cluster [DBG] 12.1c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 157)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:07.202089+0000 osd.1 (osd.1) 156 : cluster [DBG] 12.1c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:07.212682+0000 osd.1 (osd.1) 157 : cluster [DBG] 12.1c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.975328445s of 10.024065971s, submitted: 82
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:38.312079+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:08.189877+0000 osd.1 (osd.1) 158 : cluster [DBG] 7.10 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:08.200570+0000 osd.1 (osd.1) 159 : cluster [DBG] 7.10 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 4636672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 758782 data_alloc: 218103808 data_used: 147456
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 159)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:08.189877+0000 osd.1 (osd.1) 158 : cluster [DBG] 7.10 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:08.200570+0000 osd.1 (osd.1) 159 : cluster [DBG] 7.10 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:39.312269+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:09.143524+0000 osd.1 (osd.1) 160 : cluster [DBG] 5.14 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:09.154011+0000 osd.1 (osd.1) 161 : cluster [DBG] 5.14 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 4628480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 161)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:09.143524+0000 osd.1 (osd.1) 160 : cluster [DBG] 5.14 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:09.154011+0000 osd.1 (osd.1) 161 : cluster [DBG] 5.14 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:40.312412+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:10.182727+0000 osd.1 (osd.1) 162 : cluster [DBG] 3.12 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:10.193324+0000 osd.1 (osd.1) 163 : cluster [DBG] 3.12 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 4628480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 163)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:10.182727+0000 osd.1 (osd.1) 162 : cluster [DBG] 3.12 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:10.193324+0000 osd.1 (osd.1) 163 : cluster [DBG] 3.12 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:41.312563+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:11.201828+0000 osd.1 (osd.1) 164 : cluster [DBG] 5.17 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:11.212373+0000 osd.1 (osd.1) 165 : cluster [DBG] 5.17 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 4620288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 165)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:11.201828+0000 osd.1 (osd.1) 164 : cluster [DBG] 5.17 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:11.212373+0000 osd.1 (osd.1) 165 : cluster [DBG] 5.17 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:42.313602+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:12.154572+0000 osd.1 (osd.1) 166 : cluster [DBG] 10.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:12.165274+0000 osd.1 (osd.1) 167 : cluster [DBG] 10.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fcadb000/0x0/0x4ffc00000, data 0xbb32e/0x13f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 5652480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 167)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:12.154572+0000 osd.1 (osd.1) 166 : cluster [DBG] 10.18 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:12.165274+0000 osd.1 (osd.1) 167 : cluster [DBG] 10.18 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:43.314018+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:13.124390+0000 osd.1 (osd.1) 168 : cluster [DBG] 10.19 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:13.134697+0000 osd.1 (osd.1) 169 : cluster [DBG] 10.19 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 5644288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781522 data_alloc: 218103808 data_used: 147456
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 169)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:13.124390+0000 osd.1 (osd.1) 168 : cluster [DBG] 10.19 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:13.134697+0000 osd.1 (osd.1) 169 : cluster [DBG] 10.19 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:44.314204+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:14.170027+0000 osd.1 (osd.1) 170 : cluster [DBG] 10.1b scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:14.180335+0000 osd.1 (osd.1) 171 : cluster [DBG] 10.1b scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fcad1000/0x0/0x4ffc00000, data 0xc14fb/0x148000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 5636096 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 171)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:14.170027+0000 osd.1 (osd.1) 170 : cluster [DBG] 10.1b scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:14.180335+0000 osd.1 (osd.1) 171 : cluster [DBG] 10.1b scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:45.314394+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:15.135145+0000 osd.1 (osd.1) 172 : cluster [DBG] 7.1b scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:15.145839+0000 osd.1 (osd.1) 173 : cluster [DBG] 7.1b scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 5627904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 109 handle_osd_map epochs [110,111], i have 109, src has [1,111]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 173)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:15.135145+0000 osd.1 (osd.1) 172 : cluster [DBG] 7.1b scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:15.145839+0000 osd.1 (osd.1) 173 : cluster [DBG] 7.1b scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:46.314541+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:16.135960+0000 osd.1 (osd.1) 174 : cluster [DBG] 2.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:16.145819+0000 osd.1 (osd.1) 175 : cluster [DBG] 2.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 5537792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 175)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:16.135960+0000 osd.1 (osd.1) 174 : cluster [DBG] 2.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:16.145819+0000 osd.1 (osd.1) 175 : cluster [DBG] 2.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:47.314696+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:17.102357+0000 osd.1 (osd.1) 176 : cluster [DBG] 5.19 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:17.112960+0000 osd.1 (osd.1) 177 : cluster [DBG] 5.19 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 5529600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.b deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.b deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 177)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:17.102357+0000 osd.1 (osd.1) 176 : cluster [DBG] 5.19 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:17.112960+0000 osd.1 (osd.1) 177 : cluster [DBG] 5.19 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 111 heartbeat osd_stat(store_statfs(0x4fcacd000/0x0/0x4ffc00000, data 0xc55f1/0x14e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 111 handle_osd_map epochs [112,113], i have 111, src has [1,113]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.971641541s of 10.007240295s, submitted: 39
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 111 handle_osd_map epochs [112,113], i have 113, src has [1,113]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:48.314823+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:18.131104+0000 osd.1 (osd.1) 178 : cluster [DBG] 12.b deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:18.141630+0000 osd.1 (osd.1) 179 : cluster [DBG] 12.b deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 5488640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798247 data_alloc: 218103808 data_used: 155648
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 179)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:18.131104+0000 osd.1 (osd.1) 178 : cluster [DBG] 12.b deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:18.141630+0000 osd.1 (osd.1) 179 : cluster [DBG] 12.b deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:49.314971+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:19.121477+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.15 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:19.135662+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.15 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xc954a/0x154000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 5488640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 181)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:19.121477+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.15 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:19.135662+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.15 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:50.315131+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:20.141271+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.14 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:20.155390+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.14 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xc954a/0x154000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 5488640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 183)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:20.141271+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.14 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:20.155390+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.14 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:51.315250+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:21.131187+0000 osd.1 (osd.1) 184 : cluster [DBG] 12.a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:21.141522+0000 osd.1 (osd.1) 185 : cluster [DBG] 12.a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xc954a/0x154000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 114 ms_handle_reset con 0x564fb4219c00 session 0x564fb4f95c20
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 114 ms_handle_reset con 0x564fb501a400 session 0x564fb54bd860
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 5480448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 185)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:21.131187+0000 osd.1 (osd.1) 184 : cluster [DBG] 12.a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:21.141522+0000 osd.1 (osd.1) 185 : cluster [DBG] 12.a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:52.315391+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:22.125072+0000 osd.1 (osd.1) 186 : cluster [DBG] 12.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:22.135430+0000 osd.1 (osd.1) 187 : cluster [DBG] 12.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 5472256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19(unlocked)] enter Initial
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=0 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=0 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000021
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000108 1 0.000041
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000150 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 187)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:22.125072+0000 osd.1 (osd.1) 186 : cluster [DBG] 12.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:22.135430+0000 osd.1 (osd.1) 187 : cluster [DBG] 12.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:53.315509+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:23.115510+0000 osd.1 (osd.1) 188 : cluster [DBG] 12.10 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:23.129780+0000 osd.1 (osd.1) 189 : cluster [DBG] 12.10 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 5455872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811456 data_alloc: 218103808 data_used: 155648
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001400 2 0.000050
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001821 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001845 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=0 lpr=115 pi=[80,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000208 1 0.000484
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 189)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:23.115510+0000 osd.1 (osd.1) 188 : cluster [DBG] 12.10 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:23.129780+0000 osd.1 (osd.1) 189 : cluster [DBG] 12.10 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:54.315654+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:24.067623+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.4 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:24.092347+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.4 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 116 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xcd722/0x15a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 5439488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 191)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:24.067623+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.4 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:24.092347+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.4 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a(unlocked)] enter Initial
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=0 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=0 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000052
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000062 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000222 1 0.000156
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000503 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000813 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.19( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.012518 6 0.000070
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.19( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.19( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.19( v 42'1151 lc 35'172 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001392 3 0.000109
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.19( v 42'1151 lc 35'172 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.19( v 42'1151 lc 35'172 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.007986 1 0.000026
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.19( v 42'1151 lc 35'172 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049959 1 0.000044
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:55.315787+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:25.074498+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:25.088649+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 5373952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.944483 1 0.000043
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.005123 2 0.000640
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.006130 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.006241 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 118 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000313 1 0.000511
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000085 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.004779 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.017338 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[80,116)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000192 1 0.001124
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000081 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 193)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:25.074498+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.6 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:25.088649+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.6 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002451 2 0.000177
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=44
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=44
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000456 2 0.000095
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:56.315930+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:26.064831+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.0 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:26.089023+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.0 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xd1871/0x161000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 5373952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fcab5000/0x0/0x4ffc00000, data 0xd3862/0x164000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 118 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001562 2 0.000068
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004562 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 195)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:26.064831+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.0 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:26.089023+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.0 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.1a( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.006215 6 0.000165
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.1a( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.1a( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/80 les/c/f=119/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001359 4 0.000174
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/80 les/c/f=119/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/80 les/c/f=119/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/80 les/c/f=119/81/0 sis=118) [1] r=0 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.1a( v 42'1151 lc 35'403 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001961 3 0.000095
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.1a( v 42'1151 lc 35'403 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.1a( v 42'1151 lc 35'403 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000091 1 0.000067
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.1a( v 42'1151 lc 35'403 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.029265 1 0.000075
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 119 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:57.316061+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:27.021827+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.b scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:27.035943+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.b scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.459336 1 0.000026
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.491220 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 1.497583 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000287 1 0.000841
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000045 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001985 2 0.000138
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=27
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=27
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001213 2 0.000067
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 120 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 4292608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 197)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:27.021827+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.b scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:27.035943+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.b scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:58.316200+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:28.012628+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.9 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:28.023232+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.9 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.459645271s of 10.519359589s, submitted: 66
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001240 2 0.000059
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004505 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=118/119 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=120/121 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=120/121 n=5 ec=51/29 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=120/121 n=5 ec=51/29 lis/c=120/85 les/c/f=121/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000976 3 0.000104
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=120/121 n=5 ec=51/29 lis/c=120/85 les/c/f=121/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=120/121 n=5 ec=51/29 lis/c=120/85 les/c/f=121/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 121 pg[9.1a( v 42'1151 (0'0,42'1151] local-lis/les=120/121 n=5 ec=51/29 lis/c=120/85 les/c/f=121/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 4284416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853947 data_alloc: 218103808 data_used: 155648
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 199)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:28.012628+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.9 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:28.023232+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.9 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:36:59.316353+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:29.039436+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:29.053611+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 4276224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 201)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:29.039436+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:29.053611+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:00.316529+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:30.074409+0000 osd.1 (osd.1) 202 : cluster [DBG] 6.f scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:30.095566+0000 osd.1 (osd.1) 203 : cluster [DBG] 6.f scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 4399104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 203)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:30.074409+0000 osd.1 (osd.1) 202 : cluster [DBG] 6.f scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:30.095566+0000 osd.1 (osd.1) 203 : cluster [DBG] 6.f scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b(unlocked)] enter Initial
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=0 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=0 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000026
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000201 1 0.000041
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000208 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000601 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:01.316666+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:31.066837+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.14 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:31.095091+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.14 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fcaae000/0x0/0x4ffc00000, data 0xd97ec/0x16e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 4399104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 205)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:31.066837+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.14 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:31.095091+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.14 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 122 handle_osd_map epochs [122,123], i have 123, src has [1,123]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.008847 2 0.000580
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.009685 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.009730 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=0 lpr=122 pi=[64,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000156 1 0.000388
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000080 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:02.316783+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:32.097829+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:32.125757+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4219c00
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 4382720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 207)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:32.097829+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:32.125757+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:03.316935+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:33.108078+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.2 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:33.136282+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.2 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 4366336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 864637 data_alloc: 218103808 data_used: 159744
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 124 pg[9.1b( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.835468 5 0.000162
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 124 pg[9.1b( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 124 pg[9.1b( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 124 pg[9.1b( v 42'1151 lc 35'534 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002601 4 0.000121
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 124 pg[9.1b( v 42'1151 lc 35'534 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 124 pg[9.1b( v 42'1151 lc 35'534 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000035 1 0.000074
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 124 pg[9.1b( v 42'1151 lc 35'534 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045603 1 0.000038
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 209)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:33.108078+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.2 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:33.136282+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.2 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.131782 1 0.000018
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.180249 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.015867 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[64,123)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000239 1 0.000422
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000043 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000175
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000612 3 0.000101
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:04.317062+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:34.097801+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.0 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:34.140152+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.0 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 4325376 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 211)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:34.097801+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.0 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:34.140152+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.0 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 125 handle_osd_map epochs [125,126], i have 126, src has [1,126]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005716 2 0.000128
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006591 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=125/126 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=125/126 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=125/126 n=5 ec=51/29 lis/c=125/64 les/c/f=126/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000741 3 0.000164
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=125/126 n=5 ec=51/29 lis/c=125/64 les/c/f=126/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=125/126 n=5 ec=51/29 lis/c=125/64 les/c/f=126/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=125/126 n=5 ec=51/29 lis/c=125/64 les/c/f=126/65/0 sis=125) [1] r=0 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:05.317194+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:35.064244+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.1 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:35.099491+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.1 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fca9f000/0x0/0x4ffc00000, data 0xe19de/0x17b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 4300800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 213)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:35.064244+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.1 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:35.099491+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.1 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:06.317309+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:36.108218+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.4 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:36.154168+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.4 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 4284416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 126 handle_osd_map epochs [127,128], i have 126, src has [1,128]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 215)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:36.108218+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.4 scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:36.154168+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.4 scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:07.317419+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 217 sent 215 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:37.127988+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.1c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:37.163318+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.1c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 5316608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fca97000/0x0/0x4ffc00000, data 0xe7a61/0x184000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 217)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:37.127988+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.1c scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:37.163318+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.1c scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:08.317536+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 219 sent 217 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:38.172354+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.1a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:38.197000+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.1a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4fc9c00
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fca97000/0x0/0x4ffc00000, data 0xe7a61/0x184000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.038012505s of 10.082924843s, submitted: 52
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 5292032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894693 data_alloc: 218103808 data_used: 159744
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 219)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:38.172354+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.1a scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:38.197000+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.1a scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:09.317660+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 221 sent 219 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:39.203953+0000 osd.1 (osd.1) 220 : cluster [DBG] 9.19 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:39.239251+0000 osd.1 (osd.1) 221 : cluster [DBG] 9.19 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 5234688 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:10.317800+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 4 last_log 223 sent 221 num 4 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:40.223492+0000 osd.1 (osd.1) 222 : cluster [DBG] 9.1b deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:40.240445+0000 osd.1 (osd.1) 223 : cluster [DBG] 9.1b deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 221)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:39.203953+0000 osd.1 (osd.1) 220 : cluster [DBG] 9.19 deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:39.239251+0000 osd.1 (osd.1) 221 : cluster [DBG] 9.19 deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fca91000/0x0/0x4ffc00000, data 0xeb9ba/0x18a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 5218304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:11.317920+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 223)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:40.223492+0000 osd.1 (osd.1) 222 : cluster [DBG] 9.1b deep-scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:40.240445+0000 osd.1 (osd.1) 223 : cluster [DBG] 9.1b deep-scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 5218304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:12.318028+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 5210112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e(unlocked)] enter Initial
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=0 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000074 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=0 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000028 1 0.000055
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000089 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000165 1 0.000214
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000042 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000259 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:13.318189+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fca8d000/0x0/0x4ffc00000, data 0xedaa6/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.549001 2 0.000110
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.549364 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.549513 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=131) [1] r=0 lpr=131 pi=[72,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000150 1 0.000337
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f(unlocked)] enter Initial
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=0 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000168 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000045 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=0 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000031 1 0.000118
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000044 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000207
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000047 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000209 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fca8d000/0x0/0x4ffc00000, data 0xedaa6/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 5201920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907359 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:14.318333+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003998 2 0.000163
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.004260 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.004356 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=132) [1] r=0 lpr=132 pi=[95,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000112 1 0.000177
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000046 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1e( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.004821 6 0.000223
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1e( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1e( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1e( v 42'1151 lc 35'698 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002088 3 0.000122
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1e( v 42'1151 lc 35'698 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1e( v 42'1151 lc 35'698 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000079 1 0.000050
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1e( v 42'1151 lc 35'698 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035902 1 0.000061
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 133 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5103616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:15.318453+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.977162 1 0.000053
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.015332 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.020264 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=132) [1]/[0] r=-1 lpr=132 pi=[72,132)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000054 1 0.000087
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.015970 5 0.000306
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1f( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000539 1 0.000544
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=30
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=30
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000477 3 0.000295
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1f( v 42'1151 lc 35'562 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001593 4 0.000321
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1f( v 42'1151 lc 35'562 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1f( v 42'1151 lc 35'562 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000059 1 0.000025
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1f( v 42'1151 lc 35'562 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035597 1 0.000020
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 134 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5029888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:16.318575+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.968962 1 0.000043
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.006293 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.022361 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[95,133)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000052 1 0.000083
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006196 2 0.000048
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000708 2 0.000491
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 10:03:04 compute-0 ceph-osd[82261]: merge_log_dups log.dups.size()=0 olog.dups.size()=33
Nov 25 10:03:04 compute-0 ceph-osd[82261]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=33
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000451 2 0.000060
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007453 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=132/133 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=134/135 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=134/135 n=5 ec=51/29 lis/c=132/72 les/c/f=133/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=134/135 n=5 ec=51/29 lis/c=134/72 les/c/f=135/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000823 4 0.001343
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=134/135 n=5 ec=51/29 lis/c=134/72 les/c/f=135/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=134/135 n=5 ec=51/29 lis/c=134/72 les/c/f=135/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 135 pg[9.1e( v 42'1151 (0'0,42'1151] local-lis/les=134/135 n=5 ec=51/29 lis/c=134/72 les/c/f=135/73/0 sis=134) [1] r=0 lpr=134 pi=[72,134)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 5021696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:17.318702+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 225 sent 223 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:47.051018+0000 osd.1 (osd.1) 224 : cluster [DBG] 9.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:47.078935+0000 osd.1 (osd.1) 225 : cluster [DBG] 9.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 225)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:47.051018+0000 osd.1 (osd.1) 224 : cluster [DBG] 9.1e scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:47.078935+0000 osd.1 (osd.1) 225 : cluster [DBG] 9.1e scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008043 2 0.000040
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009263 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=133/134 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=135/136 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=135/136 n=5 ec=51/29 lis/c=133/95 les/c/f=134/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=135/136 n=5 ec=51/29 lis/c=135/95 les/c/f=136/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001391 4 0.000334
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=135/136 n=5 ec=51/29 lis/c=135/95 les/c/f=136/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=135/136 n=5 ec=51/29 lis/c=135/95 les/c/f=136/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 pg_epoch: 136 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=135/136 n=5 ec=51/29 lis/c=135/95 les/c/f=136/96/0 sis=135) [1] r=0 lpr=135 pi=[95,135)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 4980736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:18.318825+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  log_queue is 2 last_log 227 sent 225 num 2 unsent 2 sending 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:48.065922+0000 osd.1 (osd.1) 226 : cluster [DBG] 9.1f scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  will send 2025-11-25T09:37:48.094183+0000 osd.1 (osd.1) 227 : cluster [DBG] 9.1f scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client handle_log_ack log(last 227)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:48.065922+0000 osd.1 (osd.1) 226 : cluster [DBG] 9.1f scrub starts
Nov 25 10:03:04 compute-0 ceph-osd[82261]: log_client  logged 2025-11-25T09:37:48.094183+0000 osd.1 (osd.1) 227 : cluster [DBG] 9.1f scrub ok
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 4980736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938210 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:19.318958+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 4972544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:20.319072+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 4972544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:21.319159+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 4972544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:22.319274+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 4972544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:23.319396+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 4964352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938210 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:24.319494+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 4964352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:25.319606+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501b400 session 0x564fb54bde00
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3b7b400 session 0x564fb52dc960
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 4956160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:26.319710+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 4956160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:27.319834+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:28.319940+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938210 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:29.320037+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 4939776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:30.320138+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 4939776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:31.320238+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 4931584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:32.320349+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 4923392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:33.320446+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb4fc9c00 session 0x564fb30af680
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb4219c00 session 0x564fb3cd21e0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 4923392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938210 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:34.320548+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:35.320638+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.354000092s of 27.401119232s, submitted: 60
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:36.320756+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:37.320862+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:38.320982+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4907008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935806 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:39.321097+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4907008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:40.321213+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4898816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:41.321305+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4898816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:42.321412+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 4890624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:43.321512+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 4890624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937318 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:44.321617+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:45.321726+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:46.321931+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:47.322028+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4874240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:48.322154+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4874240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937450 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:49.322284+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 4866048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:50.322408+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 4866048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:51.322513+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.760926247s of 15.767518044s, submitted: 3
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 4833280 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:52.322630+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 4833280 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:53.322739+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 4833280 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937318 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:54.322858+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 4825088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:55.322964+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 4825088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:56.323098+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4808704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:57.323203+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4808704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:58.323297+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4808704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936727 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:37:59.323394+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:00.323491+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:01.323592+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 4784128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:02.323715+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.436513901s of 10.439648628s, submitted: 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 4784128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:03.323827+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 4784128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:04.323937+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4775936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:05.324050+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4775936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:06.324142+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4775936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:07.324234+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:08.324385+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:09.324485+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 4759552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:10.324592+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 5210112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:11.324685+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 5210112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:12.324780+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 5201920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:13.324932+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 5201920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:14.325022+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 5193728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:15.325110+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 5185536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:16.325220+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 5185536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:17.325315+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 5177344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:18.325417+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 5169152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:19.325542+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 5169152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:20.325640+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 5160960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:21.325783+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 5152768 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:22.325980+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 5144576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:23.326162+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 5144576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:24.326330+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 5136384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:25.326459+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 5136384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:26.326573+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 5136384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:27.326681+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 5128192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:28.326826+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 5128192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:29.327105+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 5120000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:30.327210+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c45c00 session 0x564fb3cd1a40
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c44000 session 0x564fb30ae960
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 5111808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:31.327367+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5103616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:32.327566+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5103616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:33.327699+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5103616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:34.327847+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 5095424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:35.327978+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 5095424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:36.328064+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 5087232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:37.328164+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 5087232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:38.328291+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 5087232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:39.328389+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5070848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:40.328493+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44000
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.600059509s of 38.601051331s, submitted: 1
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5070848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:41.328591+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5070848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:42.328737+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5062656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:43.328849+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5062656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938239 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:44.328974+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5054464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:45.329126+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5054464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:46.329221+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5046272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:47.329315+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5046272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:48.329424+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5046272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938239 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:49.329537+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 5038080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:50.329657+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 5038080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:51.329795+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5029888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:52.329952+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5029888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:53.330087+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 5021696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938239 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:54.330214+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 5021696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:55.330355+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 5013504 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:56.330490+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.999441147s of 16.002183914s, submitted: 2
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:57.330583+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:58.330680+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:59.330782+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4997120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:00.330933+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4988928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:01.331050+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4988928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:02.331180+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4988928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:03.331304+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 4980736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:04.331439+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 4972544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:05.331560+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 4972544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:06.331685+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 4964352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:07.331837+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 4964352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:08.331928+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 4956160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:09.332017+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 4956160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:10.332106+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:11.332192+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:12.332300+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:13.332402+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 4939776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:14.332533+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c44000 session 0x564fb5afef00
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:15.332660+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 4931584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:16.333472+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 4923392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:17.333596+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:18.333717+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:19.333818+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4907008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:20.333948+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4907008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:21.334049+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4898816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:22.334169+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4898816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:23.334289+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4898816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:24.334424+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 4890624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.253587723s of 28.255268097s, submitted: 1
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:25.334532+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 4890624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:26.334639+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:27.334748+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:28.334847+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:29.334981+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4874240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939751 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:30.335089+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4874240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:31.335181+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 4866048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:32.335363+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 4866048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:33.335584+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 4857856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:34.335690+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 4857856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939160 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:35.335824+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 4849664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:36.335919+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4841472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:37.336074+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4841472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:38.336165+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4841472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:39.336267+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 4833280 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939160 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:40.336361+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 4825088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.733262062s of 15.736958504s, submitted: 3
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:41.336527+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4808704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:42.336639+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4808704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:43.336739+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:44.336849+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:45.337008+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:46.337110+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 4792320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:47.337243+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 4792320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:48.337364+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 4792320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:49.337469+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 4784128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:50.337588+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4775936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:51.337679+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:52.337945+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:53.338047+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:54.338148+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 4759552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:55.338245+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 4759552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:56.338341+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4751360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:57.338441+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4751360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:58.338546+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4751360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:59.338654+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 4743168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:00.338750+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 4743168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:01.338858+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 4726784 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:02.338941+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 4726784 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:03.339042+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 4726784 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a400 session 0x564fb6126d20
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:04.339149+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 4718592 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:05.339252+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 4718592 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:06.339354+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:07.339480+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:08.339591+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:09.339703+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:10.339822+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:11.339925+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:12.340034+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:13.340143+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4219c00
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.208274841s of 33.209514618s, submitted: 1
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:14.340244+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4694016 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939160 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:15.340339+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 4677632 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:16.340439+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 4669440 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:17.340551+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 4669440 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:18.340688+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4661248 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:19.340834+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4661248 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939160 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4fc9c00
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:20.340943+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4661248 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:21.341079+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 4653056 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:22.341235+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 4653056 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:23.341341+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4644864 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:24.341410+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4644864 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940081 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:25.341507+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4644864 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:26.341604+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 4636672 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:27.341716+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 4628480 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:28.341817+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 4628480 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:29.342246+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4620288 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940081 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:30.342404+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4620288 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.985671997s of 16.989835739s, submitted: 3
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:31.342507+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4620288 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:32.342632+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4612096 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:33.342720+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4612096 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:34.342808+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4612096 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:35.342916+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 4603904 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:36.343005+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 4603904 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:37.343098+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4595712 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:38.343187+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4595712 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:39.343286+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4595712 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:40.343379+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4587520 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:41.343527+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4587520 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:42.343693+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4587520 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:43.343823+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4579328 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:44.343935+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4579328 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:45.344096+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4571136 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:46.344226+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4562944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:47.344352+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4562944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:48.344482+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4554752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb4219c00 session 0x564fb3f943c0
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb4fc9c00 session 0x564fb3cd2960
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:49.344583+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4554752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:50.344706+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4538368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:51.344821+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4538368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:52.344952+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4538368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:53.345050+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4530176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:54.345157+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4530176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:55.345273+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:56.345371+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:57.345466+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:58.345592+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4513792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:59.345715+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4513792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.493139267s of 28.494064331s, submitted: 1
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:00.345819+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4513792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:01.345937+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 4505600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:02.346062+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 4505600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:03.346157+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:04.346272+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943105 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:05.346406+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:06.346526+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 4481024 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:07.346640+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 4481024 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:08.346732+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 4472832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:09.346835+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 4472832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943105 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:10.346940+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 4472832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:11.347043+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4456448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.036651611s of 12.040016174s, submitted: 3
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:12.347164+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4456448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:13.347259+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 4448256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:14.347419+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 4440064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:15.347515+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 4423680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:16.347616+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:17.347712+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:18.347860+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:19.347969+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:20.348059+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:21.348164+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 4399104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3b7b400 session 0x564fb3d7e960
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a800 session 0x564fb6550d20
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:22.348292+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 4399104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:23.348376+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4390912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:24.348489+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4390912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:25.348594+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:26.348694+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:27.348799+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:28.348931+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:29.349092+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 4374528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:30.349191+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 4374528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:31.349299+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 4358144 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:32.349551+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 4358144 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:33.349659+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 4349952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:34.349770+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:04 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 4349952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:04 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:35.349861+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:36.349928+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:37.350082+0000)
Nov 25 10:03:04 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:04 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:38.350224+0000)
Nov 25 10:03:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 25 10:03:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2822567927' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 4333568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.160346985s of 27.163261414s, submitted: 2
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:39.350361+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 4333568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942514 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:40.350474+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 4325376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:41.350578+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 4325376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:42.350716+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 4325376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:43.350868+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 4317184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:44.350936+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 4317184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944026 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4219c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:45.351038+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4300800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:46.351154+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4300800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:47.351265+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:48.351368+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501b800 session 0x564fb3d7e780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c45c00 session 0x564fb3d7e1e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:49.351481+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944026 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.999965668s of 11.002209663s, submitted: 2
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:50.351614+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:51.351746+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:52.351879+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:53.351997+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4300800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:54.352098+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4300800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943303 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:55.352215+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:56.352307+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:57.352403+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8035 writes, 32K keys, 8035 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 8035 writes, 1713 syncs, 4.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8035 writes, 32K keys, 8035 commit groups, 1.0 writes per commit group, ingest: 20.90 MB, 0.03 MB/s
                                           Interval WAL: 8035 writes, 1713 syncs, 4.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:58.352502+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 4218880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:59.352640+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943303 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:00.352757+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:01.352851+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:02.352925+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4194304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.933315277s of 12.937394142s, submitted: 3
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:03.353034+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4194304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:04.353134+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944947 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:05.353228+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:06.353322+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:07.353415+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 4177920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:08.353558+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 4177920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:09.353676+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4169728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945868 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:10.353815+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4169728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:11.353920+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4169728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:12.354045+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4161536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:13.354139+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4161536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:14.354303+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 4153344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945868 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.422577858s of 12.426486015s, submitted: 3
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:15.354424+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 4153344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:16.354576+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 4145152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:17.354663+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:18.354759+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:19.354871+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 4120576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:20.354999+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 4120576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:21.355150+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:22.355309+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:23.355447+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4104192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:24.355592+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4104192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:25.355717+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4104192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:26.355868+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 4096000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:27.355940+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 4096000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:28.356056+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 4087808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:29.356157+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4079616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:30.356269+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4079616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:31.356364+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:32.356466+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:33.356599+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:34.356743+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:35.356835+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:36.356952+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 4055040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:37.357066+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 4055040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:38.357239+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:39.357356+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:40.357491+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 4038656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:41.357600+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 4030464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:42.357934+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 4030464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:43.358031+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 4022272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:44.358129+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 4022272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:45.358236+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 4014080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:46.358349+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 4014080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:47.358456+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4005888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:48.358640+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4005888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:49.358764+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4005888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:50.358887+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 3997696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:51.358964+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 3997696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:52.359132+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 3989504 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:53.359242+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 3989504 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:54.359342+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 3981312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:55.359431+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:56.359520+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:57.359929+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:58.360038+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:59.360128+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:00.360222+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 3956736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:01.360322+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 3948544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:02.360445+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 3948544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:03.360560+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 3940352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:04.360675+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 3940352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:05.360786+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:06.360885+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:07.361020+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 52.612483978s of 52.613750458s, submitted: 1
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:08.361114+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:09.361231+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:10.361329+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:11.361434+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:12.361594+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:13.361718+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:14.361814+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:15.361917+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:16.362017+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:17.362126+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:18.362215+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:19.362328+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:20.362467+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:21.362612+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:22.362753+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:23.362877+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:24.362925+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:25.363024+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:26.363131+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:27.363222+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:28.363363+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:29.363490+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:30.363606+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:31.363714+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:32.363827+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:33.363938+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:34.364038+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:35.364136+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:36.364231+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:37.364338+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:38.364436+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:39.364537+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:40.364965+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:41.365054+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:42.365203+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:43.365325+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:44.365448+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:45.365593+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:46.365688+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3489792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:47.365847+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3489792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:48.365925+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:49.366012+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:50.366104+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 3473408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:51.366203+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 3473408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:52.366314+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 3473408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:53.366413+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3448832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:54.366511+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3448832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:55.366683+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3448832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:56.366777+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3440640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:57.366886+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3440640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:58.367007+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3440640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:59.367143+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3440640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:00.367311+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:01.367412+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:02.367526+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:03.367632+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:04.367725+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:05.367886+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:06.368018+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:07.368118+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:08.368221+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:09.368322+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:10.368430+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:11.368526+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:12.368645+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:13.368734+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:14.368827+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:15.368934+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:16.369028+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:17.369138+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:18.369252+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:19.369377+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:20.369468+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:21.369564+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:22.369671+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:23.369842+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:24.369945+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:25.370043+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:26.370147+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:27.370260+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:28.370363+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:29.370475+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:30.370590+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:31.370684+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:32.370818+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:33.370941+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:34.371068+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:35.371172+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:36.371278+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb4219c00 session 0x564fb3cf8780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c44000 session 0x564fb6550000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:37.371374+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:38.371476+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:39.371826+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:40.371934+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:41.372029+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:42.372150+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:43.372259+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:44.372351+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:45.372454+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:46.372565+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 98.840316772s of 99.029678345s, submitted: 354
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 3399680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:47.372699+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 3399680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:48.372799+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 3399680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:49.372929+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 3399680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945868 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:50.373034+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 3391488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:51.373146+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 3391488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:52.373274+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:53.373379+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:54.373508+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947380 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:55.373631+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:56.373745+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:57.373865+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:58.373926+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:59.374023+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946198 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:00.374128+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:01.374220+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:02.374393+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:03.374498+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.987428665s of 16.990921021s, submitted: 4
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:04.374640+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:05.374785+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:06.374900+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:07.375001+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:08.375113+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:09.375215+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:10.375315+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:11.375417+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:12.375528+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:13.375633+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:14.375756+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3366912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:15.375868+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3366912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:16.375967+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:17.376063+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:18.376165+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:19.376270+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:20.376391+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:21.376490+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:22.376631+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:23.376722+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:24.376812+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:25.376923+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:26.377012+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:27.377138+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:28.377247+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:29.377358+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:30.377491+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:31.377577+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:32.377676+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:33.377775+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:34.377875+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:35.377932+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:36.378019+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:37.378123+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:38.378290+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:39.378405+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:40.378503+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:41.378592+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:42.378697+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:43.378820+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:44.379149+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:45.379247+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:46.379339+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:47.379443+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:48.379546+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:49.379688+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:50.379790+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:51.379936+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:52.380070+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:53.380188+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:54.380303+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:55.380414+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:56.380513+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:57.380611+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:58.380712+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:59.380812+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:00.380960+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:01.381046+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:02.381147+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:03.381241+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:04.381342+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:05.382877+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:06.382946+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a800 session 0x564fb31a9c20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c45c00 session 0x564fb31a90e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:07.383061+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:08.383166+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:09.383288+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:10.383403+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3342336 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:11.384005+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3342336 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:12.384165+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3342336 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:13.384272+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3342336 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:14.384384+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:15.384512+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:16.384654+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 73.012329102s of 73.013580322s, submitted: 1
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:17.384826+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:18.384935+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:19.385045+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946198 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:20.385143+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:21.385346+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:22.385466+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:23.385573+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:24.385680+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946198 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:25.385784+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:26.385960+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:27.386123+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb322d400 session 0x564fb31a85a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501b400 session 0x564fb31a9860
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:28.386226+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.043398857s of 12.045754433s, submitted: 1
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:29.386330+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:30.386430+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:31.386521+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:32.386626+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:33.386748+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:34.386858+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:35.386924+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945475 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:36.387014+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:37.387108+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:38.387272+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:39.387377+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:40.387480+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:41.387574+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:42.387703+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:43.387796+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:44.387950+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:45.388099+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:46.388191+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:47.388290+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:48.388393+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:49.388487+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:50.388582+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:51.388677+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:52.388791+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403c400 session 0x564fb32d7e00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501b800 session 0x564fb3d7e000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:53.388884+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:54.388992+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.741083145s of 25.744991302s, submitted: 3
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:55.389097+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945475 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:56.389188+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:57.389281+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:58.389407+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:59.389568+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:00.389723+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945475 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:01.389828+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:02.390012+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 3293184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:03.390114+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 3293184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:04.390223+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 3293184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:05.390390+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 3293184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.234606743s of 11.236687660s, submitted: 2
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:06.390491+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:07.390612+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:08.390707+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:09.390821+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:10.390930+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948631 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:11.391037+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:12.391166+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:13.391270+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:14.391365+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:15.391455+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948631 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:16.391549+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.965599060s of 10.967306137s, submitted: 2
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:17.391647+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:18.391708+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:19.391814+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:20.391924+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948499 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403c400 session 0x564fb538ed20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:21.392013+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:22.392118+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:23.392243+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:24.392359+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:25.392464+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948499 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:26.392560+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:27.392653+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:28.392729+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3268608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:29.392862+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3268608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:30.393005+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3268608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948499 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.246407509s of 14.247385979s, submitted: 1
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:31.393101+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:32.393242+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:33.393351+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:34.393456+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:35.393557+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950143 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:36.393669+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:37.393771+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:38.393976+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:39.394095+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:40.394189+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948961 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:41.394279+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:42.394395+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:43.394495+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:44.394600+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:45.394717+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948961 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:46.394806+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:47.394917+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.738798141s of 16.743886948s, submitted: 4
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:48.395076+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3244032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:49.395207+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3244032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:50.395328+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3244032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:51.395462+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3235840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:52.395594+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:53.395703+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:54.395813+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:55.395920+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:56.396045+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:57.396137+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:58.396234+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:59.396333+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:00.396427+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:01.396581+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:02.396726+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:03.396863+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:04.397002+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:05.397136+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:06.397300+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:07.397469+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:08.397635+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:09.397769+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:10.397951+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:11.398075+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:12.398236+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403d800 session 0x564fb3d80960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a400 session 0x564fb5afeb40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:13.398350+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:14.398485+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:15.398611+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:16.398710+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:17.398801+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:18.398921+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:19.399027+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:20.399148+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:21.399261+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:22.399395+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:23.399731+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.605865479s of 35.606979370s, submitted: 1
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:24.399964+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:25.400107+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948961 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:26.400298+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:27.400461+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:28.400556+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:29.400665+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:30.400793+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948961 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:31.400887+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:32.401019+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:33.401136+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:34.401244+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:35.401461+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948370 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:36.401617+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:37.401752+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:38.401868+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:39.402013+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.297077179s of 16.299779892s, submitted: 2
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:40.402158+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:41.402282+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:42.402441+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:43.402579+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:44.402696+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:45.402839+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:46.402943+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:47.403108+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:48.403245+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:49.403381+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:50.403478+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:51.403580+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:52.403720+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:53.403841+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:54.403993+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:55.404138+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:56.404267+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:57.404408+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:58.404538+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:59.404675+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:00.404847+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:01.404932+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:02.405034+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:03.405126+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:04.405231+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:05.405329+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:06.405437+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:07.405563+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:08.405696+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:09.405787+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:10.405942+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:11.406106+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:12.406276+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:13.406429+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:14.406553+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:15.406683+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3203072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: mgrc ms_handle_reset ms_handle_reset con 0x564fb403cc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/92811439
Nov 25 10:03:05 compute-0 ceph-osd[82261]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/92811439,v1:192.168.122.100:6801/92811439]
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: get_auth_request con 0x564fb3c44000 auth_method 0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: mgrc handle_mgr_configure stats_period=5
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:16.406836+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:17.407068+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:18.407227+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:19.407396+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:20.407563+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:21.407726+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:22.407868+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c45c00 session 0x564fb60d03c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:23.408049+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:24.408203+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb322d400 session 0x564fb32d92c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403dc00 session 0x564fb4f930e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:25.408363+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:26.408526+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:27.408653+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:28.408773+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:29.408929+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:30.409104+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:31.409200+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:32.409350+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.340190887s of 53.341789246s, submitted: 1
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:33.409479+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:34.409605+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:35.409719+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948502 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:36.409857+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:37.409992+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:38.410135+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a800 session 0x564fb6443a40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:39.410241+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:40.410381+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950014 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:41.410515+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:42.410688+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:43.410887+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:44.411068+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:45.411229+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950014 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:46.411392+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:47.411523+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:48.411673+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.570524216s of 15.574358940s, submitted: 3
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:49.411805+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:50.411936+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950014 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:51.412063+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:52.412227+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:53.412338+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:54.412481+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:55.412659+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951394 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:56.412785+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:57.412943+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:58.413109+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.126813889s of 10.132299423s, submitted: 4
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:59.413256+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:00.413391+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950803 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:01.413507+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:02.413680+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:03.413823+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:04.413953+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:05.414122+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:06.414247+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:07.414355+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:08.414453+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:09.414564+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:10.414679+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:11.414820+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:12.415007+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:13.415159+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:14.415330+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:15.415490+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:16.415620+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:17.415731+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:18.415862+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:19.415967+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:20.416067+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:21.416177+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403d800 session 0x564fb54bc5a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:22.416286+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:23.416383+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:24.416503+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:25.416620+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:26.416751+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:27.416886+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:28.417029+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:29.417173+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:30.417310+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:31.417428+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:32.417606+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.853954315s of 33.855854034s, submitted: 2
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:33.417783+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:34.417878+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:35.418004+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952315 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:36.418132+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:37.418253+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:38.418413+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:39.418529+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:40.418664+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953827 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:41.418807+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:42.418942+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:43.419053+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:44.419198+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:45.419356+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953236 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:46.419489+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:47.419645+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:48.419794+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:49.420319+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.244781494s of 17.250627518s, submitted: 4
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:50.420496+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:51.420606+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:52.420785+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:53.420942+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:54.421084+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:55.421201+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:56.421362+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:57.421460+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:58.421574+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:59.421725+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:00.421873+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:01.422029+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:02.422191+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:03.422327+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:04.422465+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:05.422602+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:06.422742+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:07.422886+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:08.423026+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:09.423143+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:10.423250+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:11.423343+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:12.423465+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:13.423591+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:14.423785+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:15.423946+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:16.424082+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:17.424252+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:18.424342+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:19.424445+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.797666550s of 29.798833847s, submitted: 1
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 137 ms_handle_reset con 0x564fb403d800 session 0x564fb55dd680
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 137 ms_handle_reset con 0x564fb322d400 session 0x564fb61274a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:20.424546+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961785 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:21.424642+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 139 ms_handle_reset con 0x564fb403dc00 session 0x564fb3ccfc20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fc667000/0x0/0x4ffc00000, data 0xfbdaf/0x1a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:22.424757+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fc667000/0x0/0x4ffc00000, data 0xfbdaf/0x1a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 140 ms_handle_reset con 0x564fb501a800 session 0x564fb55ade00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x8fdeda/0x9a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 18644992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:23.424954+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 18628608 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:24.425110+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:25.425246+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027532 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:26.425348+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:27.425463+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:28.425581+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:29.425690+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:30.425814+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 ms_handle_reset con 0x564fb501a400 session 0x564fb345fa40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 ms_handle_reset con 0x564fb403c400 session 0x564fb60d10e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.923460960s of 10.963563919s, submitted: 66
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027664 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:31.425945+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:32.426057+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:33.426192+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:34.426251+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:35.426378+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027664 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:36.426481+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:37.426607+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:38.426712+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:39.426816+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:40.426975+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.443756104s of 10.445528984s, submitted: 1
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024008 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:41.427143+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:42.427326+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:43.427463+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:44.427577+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 ms_handle_reset con 0x564fb3c45c00 session 0x564fb663c3c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 ms_handle_reset con 0x564fb501b800 session 0x564fb60d12c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:45.427707+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024008 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:46.427814+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88317952 unmapped: 17555456 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:47.427976+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:48.428115+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:49.428219+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:50.428312+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024797 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:51.428408+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:52.428553+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:53.428677+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:54.428781+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:55.428908+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.355128288s of 14.359399796s, submitted: 4
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024929 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:56.428994+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:57.429099+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 8934 writes, 34K keys, 8934 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8934 writes, 2144 syncs, 4.17 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 899 writes, 1569 keys, 899 commit groups, 1.0 writes per commit group, ingest: 0.68 MB, 0.00 MB/s
                                           Interval WAL: 899 writes, 431 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:58.429238+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 18563072 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:59.430004+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 18563072 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:00.430150+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 18554880 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028563 data_alloc: 218103808 data_used: 163840
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:01.430254+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 143 ms_handle_reset con 0x564fb3c45800 session 0x564fb5e84000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 17465344 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:02.430374+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 17465344 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:03.430485+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 17465344 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:04.430649+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 143 ms_handle_reset con 0x564fb6009400 session 0x564fb5e843c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 17457152 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb827000/0x0/0x4ffc00000, data 0xf351e0/0xfe3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:05.430753+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 90062848 unmapped: 15810560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094708 data_alloc: 218103808 data_used: 1814528
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:06.430848+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 11427840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:07.430934+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.036621094s of 12.066541672s, submitted: 33
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 11476992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:08.431032+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 11476992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:09.431139+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb825000/0x0/0x4ffc00000, data 0xf371b2/0xfe6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:10.431236+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129367 data_alloc: 218103808 data_used: 6565888
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:11.431346+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:12.431492+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:13.431587+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:14.431684+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb825000/0x0/0x4ffc00000, data 0xf371b2/0xfe6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94322688 unmapped: 11550720 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:15.431791+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102203392 unmapped: 3670016 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188577 data_alloc: 218103808 data_used: 7290880
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:16.431907+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb825000/0x0/0x4ffc00000, data 0xf371b2/0xfe6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104456192 unmapped: 1417216 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:17.431999+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104456192 unmapped: 1417216 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:18.432114+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104456192 unmapped: 1417216 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:19.432777+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 1384448 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:20.432925+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 1253376 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200525 data_alloc: 218103808 data_used: 7475200
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:21.433025+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 1253376 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:22.433216+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 1253376 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:23.433880+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 1253376 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:24.434006+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:25.434120+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201133 data_alloc: 218103808 data_used: 7536640
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:26.434234+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:27.434355+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:28.434449+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:29.434552+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:30.434706+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201133 data_alloc: 218103808 data_used: 7536640
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:31.434822+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:32.434938+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:33.435044+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb56552c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403c400 session 0x564fb3d7f680
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b800 session 0x564fb32d81e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:34.435191+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb5e84b40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.712247849s of 27.780412674s, submitted: 108
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:35.435280+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009800 session 0x564fb64592c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb54bc960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403c400 session 0x564fb4f94d20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b800 session 0x564fb3cd2d20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb32d4780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275793 data_alloc: 218103808 data_used: 7536640
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:36.435408+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93a0000/0x0/0x4ffc00000, data 0x221c1c2/0x22cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:37.435508+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93a0000/0x0/0x4ffc00000, data 0x221c1c2/0x22cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:38.435625+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:39.435733+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:40.435854+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6008400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275793 data_alloc: 218103808 data_used: 7536640
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:41.435956+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 12337152 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:42.436068+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93a0000/0x0/0x4ffc00000, data 0x221c1c2/0x22cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:43.436172+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:44.436317+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:45.436496+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355745 data_alloc: 234881024 data_used: 19369984
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:46.436656+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.959013939s of 11.979373932s, submitted: 12
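[annotation] Reading the _kv_sync_thread line above: the BlueStore kv sync thread was idle for 11.959s of an 11.979s reporting window while committing 12 transactions, so the kv commit path is essentially unloaded here. The arithmetic, as a back-of-envelope check:

    idle, window, submitted = 11.959013939, 11.979373932, 12
    print(f"kv_sync idle {idle / window:.2%}, "
          f"{submitted / window:.1f} txns/s submitted")   # ~99.83% idle, ~1.0 txn/s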
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:47.437015+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93a0000/0x0/0x4ffc00000, data 0x221c1c2/0x22cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:48.437192+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:49.437341+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:50.437487+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 7520256 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356017 data_alloc: 234881024 data_used: 19394560
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:51.437585+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a47000/0x0/0x4ffc00000, data 0x2b671c2/0x2c17000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 2236416 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:52.438633+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f89ff000/0x0/0x4ffc00000, data 0x2b961c2/0x2c46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 2023424 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:53.438722+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 2023424 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:54.438852+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f89ff000/0x0/0x4ffc00000, data 0x2b961c2/0x2c46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 1990656 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:55.438972+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 1990656 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440081 data_alloc: 234881024 data_used: 20099072
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:56.439099+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 1990656 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.975911140s of 10.039819717s, submitted: 109
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:57.439230+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:58.439362+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a23000/0x0/0x4ffc00000, data 0x2b991c2/0x2c49000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:59.439495+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:00.439615+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430841 data_alloc: 234881024 data_used: 20099072
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:01.439710+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a23000/0x0/0x4ffc00000, data 0x2b991c2/0x2c49000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6008400 session 0x564fb5654780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:02.439870+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a23000/0x0/0x4ffc00000, data 0x2b991c2/0x2c49000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb54bcd20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:03.439977+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:04.440081+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:05.440196+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201687 data_alloc: 218103808 data_used: 7536640
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:06.440322+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:07.440487+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb6668f00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.509199142s of 10.522704124s, submitted: 30
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009400 session 0x564fb55eab40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403c400 session 0x564fb5e84f00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103309312 unmapped: 18849792 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:08.440590+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f0f000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:09.440706+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:10.440831+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052351 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:11.440994+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:12.441126+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:13.441271+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:14.441391+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:15.441560+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052351 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:16.441720+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:17.441857+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:18.441985+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:19.442122+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:20.442248+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb3d7e000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb341de00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:21.442342+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052351 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:22.442456+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:23.442859+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:24.443026+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:25.443150+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:26.443310+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052351 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:27.443460+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb5e854a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb64434a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb5e85c20
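[annotation] The challenge/reset pairs above share connection pointers: each "handle_auth_request added challenge on 0xADDR" is followed by an "ms_handle_reset con 0xADDR" for the same address, i.e. the incoming connections are being re-authenticated and then recycled. A sketch that pairs them by address; the matching heuristic and sample list are this note's own, not Ceph tooling:

    import re

    lines = [
        "monclient: handle_auth_request added challenge on 0x564fb403d800",
        "osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb5e854a0",
        "monclient: handle_auth_request added challenge on 0x564fb3c45800",
        "osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb64434a0",
    ]

    pending = set()
    for line in lines:
        if m := re.search(r"added challenge on (0x[0-9a-f]+)", line):
            pending.add(m.group(1))
        elif m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line):
            state = "after a challenge" if m.group(1) in pending else "unprompted"
            pending.discard(m.group(1))
            print(f"{m.group(1)} reset {state}")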
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:28.443541+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403c400 session 0x564fb5e85e00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.717388153s of 20.917802811s, submitted: 378
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb30af4a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb30ae1e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb51423c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb5143860
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009400 session 0x564fb341c780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:29.443709+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 20619264 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:30.443816+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 20619264 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:31.443986+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 20619264 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066393 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:32.444135+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 20611072 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab9a000/0x0/0x4ffc00000, data 0xa221c2/0xad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:33.444266+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 20611072 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:34.444408+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 20611072 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:35.444572+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 20611072 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:36.444751+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067505 data_alloc: 218103808 data_used: 200704
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:37.444912+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab9a000/0x0/0x4ffc00000, data 0xa221c2/0xad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:38.445058+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:39.445212+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.470705032s of 11.479301453s, submitted: 10
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:40.445345+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:41.445475+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066914 data_alloc: 218103808 data_used: 200704
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab9a000/0x0/0x4ffc00000, data 0xa221c2/0xad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:42.445653+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 20316160 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:43.445778+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 20316160 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab9a000/0x0/0x4ffc00000, data 0xa221c2/0xad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:44.445935+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 20316160 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:45.446071+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 18931712 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:46.446191+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:47.446388+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:48.446519+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:49.446615+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:50.446760+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:51.446869+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:52.447044+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:53.447214+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:54.447338+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:55.447481+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:56.447585+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a400 session 0x564fb65510e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb663dc20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:57.447729+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:58.447935+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:59.448066+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:00.448162+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:01.448287+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:02.448466+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:03.448599+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:04.448731+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:05.448861+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 19488768 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:06.448992+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 19488768 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.047389984s of 27.079875946s, submitted: 43
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:07.449106+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 19488768 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb3cd3c20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb538ed20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:08.449256+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 20299776 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:09.449396+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 20299776 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:10.449505+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 20283392 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:11.449659+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 20283392 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058782 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:12.449816+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 20283392 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:13.449922+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 20283392 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:14.450062+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:15.450256+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:16.450366+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059703 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:17.450518+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:18.450640+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:19.450785+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:20.450942+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.764374733s of 13.775873184s, submitted: 10
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:21.451075+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059571 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:22.451238+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:23.451372+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:24.451485+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:25.451590+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:26.451685+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059571 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6008400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6008400 session 0x564fb64581e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb5aff2c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb3e174a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb4fe9c20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a400 session 0x564fb51674a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:27.451789+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:28.451935+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:29.452092+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:30.452272+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:31.452434+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075612 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:32.452617+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:33.452735+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.655667305s of 12.667624474s, submitted: 12
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:34.452935+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:35.453093+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:36.453230+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090336 data_alloc: 218103808 data_used: 2330624
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:37.453346+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:38.453477+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:39.453609+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:40.453740+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:41.453848+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090336 data_alloc: 218103808 data_used: 2330624
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:42.454038+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:43.454196+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.119963646s of 10.137836456s, submitted: 20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:44.454319+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:45.454439+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:46.454609+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119484 data_alloc: 218103808 data_used: 2330624
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:47.454740+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:48.454929+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:49.455064+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:50.455164+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:51.455264+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119484 data_alloc: 218103808 data_used: 2330624
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:52.455384+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b800 session 0x564fb3cf9c20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb32d63c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:53.455523+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:54.455644+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:55.455766+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:56.455952+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119484 data_alloc: 218103808 data_used: 2330624
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:57.456120+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:58.456232+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:59.456356+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:00.456480+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb32d4d20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb32d4780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb66e9e00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb66e8780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.249073029s of 17.250936508s, submitted: 2
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a400 session 0x564fb66e8960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a400 session 0x564fb6550f00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb6551a40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb6550b40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb6550d20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:01.456614+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182469 data_alloc: 218103808 data_used: 2330624
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:02.456769+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:03.456882+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:04.457059+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:05.457243+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6635c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:06.457369+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183145 data_alloc: 218103808 data_used: 2330624
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:07.457480+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:08.457954+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb60efc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:09.458066+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:10.458181+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:11.458295+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240145 data_alloc: 234881024 data_used: 10731520
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.196850777s of 11.221287727s, submitted: 28
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:12.458419+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:13.458556+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:14.458668+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:15.458767+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:16.458866+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238963 data_alloc: 234881024 data_used: 10731520
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb60efc00 session 0x564fb55ac780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb6550000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:17.458933+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:18.459042+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:19.459144+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9049000/0x0/0x4ffc00000, data 0x2162224/0x2213000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111943680 unmapped: 27148288 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:20.459257+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112197632 unmapped: 26894336 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:21.459426+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 26886144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338937 data_alloc: 234881024 data_used: 11501568
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9017000/0x0/0x4ffc00000, data 0x2194224/0x2245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:22.459551+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 26886144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:23.459683+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 26886144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:24.459818+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 26886144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.927748680s of 13.000759125s, submitted: 104
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:25.459913+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9017000/0x0/0x4ffc00000, data 0x2194224/0x2245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:26.460005+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334049 data_alloc: 234881024 data_used: 11505664
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662a000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:27.460108+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9014000/0x0/0x4ffc00000, data 0x2197224/0x2248000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111853568 unmapped: 27238400 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb4f95860
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:28.460221+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662c400 session 0x564fb5167680
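
Interleaved with the periodic messages are connection-lifecycle events: an incoming connection triggers a CephX challenge ("handle_auth_request added challenge on <con>") and, when the short-lived connection closes, the OSD tears down its session ("ms_handle_reset con <con> session <ptr>"). The two events share a connection pointer — 0x564fb662c400 in the pair above — which is how they can be correlated; in this stretch a challenge on a connection is generally followed by a reset of the same pointer, the normal lifecycle of a transient authenticated probe. A sketch that pairs them up (the two sample strings are log lines from above):

    import re
    from collections import defaultdict

    # Group challenge and reset events by connection pointer; a challenge
    # with no eventual reset would be worth a closer look.
    events = [
        "monclient: handle_auth_request added challenge on 0x564fb662c400",
        "osd.1 144 ms_handle_reset con 0x564fb662c400 session 0x564fb5167680",
    ]
    CON = re.compile(r'(?:challenge on|reset con) (0x[0-9a-f]+)')

    by_con = defaultdict(list)
    for e in events:
        m = CON.search(e)
        if m:
            by_con[m.group(1)].append("challenge" if "challenge" in e else "reset")

    for con, kinds in by_con.items():
        print(con, kinds)  # -> 0x564fb662c400 ['challenge', 'reset']
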
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 34783232 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:29.460348+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 34783232 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:30.460494+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa2da000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 34783232 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb3d7e5a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:31.460611+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8718000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8718000 session 0x564fb55ade00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079129 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:32.460765+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:33.460926+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:34.461048+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:35.461213+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:36.461344+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.006490707s of 11.051873207s, submitted: 52
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078538 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:37.461509+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:38.461681+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:39.461777+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:40.461928+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:41.462044+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078538 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:42.462187+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:43.462274+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:44.462440+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:45.462573+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:46.462705+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078406 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:47.462825+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:48.462971+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:49.463105+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:50.463231+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:51.463612+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078406 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:52.463724+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:53.463851+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:54.463984+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.814508438s of 18.817729950s, submitted: 2
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb6443a40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb4f94960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662c400 session 0x564fb5affc20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb55eb680
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:55.464071+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8718400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8718400 session 0x564fb341d680
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 36560896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:56.464227+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 36552704 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109100 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa591000/0x0/0x4ffc00000, data 0xc1b214/0xccb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:57.464376+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb5034d20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 36552704 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb3cd01e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:58.464498+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662c400 session 0x564fb5143c20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 36519936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb4febe00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8719000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6008400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:59.464608+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 36519936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:00.464816+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 35700736 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa58f000/0x0/0x4ffc00000, data 0xc1b247/0xccd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:01.464920+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 35700736 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133315 data_alloc: 218103808 data_used: 3379200
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:02.465039+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 35700736 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:03.465180+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 35700736 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:04.465773+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa58f000/0x0/0x4ffc00000, data 0xc1b247/0xccd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:05.465923+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:06.466057+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133315 data_alloc: 218103808 data_used: 3379200
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa58f000/0x0/0x4ffc00000, data 0xc1b247/0xccd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:07.466191+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa58f000/0x0/0x4ffc00000, data 0xc1b247/0xccd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:08.466302+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.611018181s of 13.643602371s, submitted: 41
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:09.466426+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 30982144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c37000/0x0/0x4ffc00000, data 0x1573247/0x1625000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:10.466504+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c37000/0x0/0x4ffc00000, data 0x1573247/0x1625000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 30982144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:11.467090+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 30982144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219309 data_alloc: 218103808 data_used: 4100096
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:12.467212+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 30965760 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:13.467362+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 30965760 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c37000/0x0/0x4ffc00000, data 0x1573247/0x1625000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:14.467506+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 30965760 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:15.467652+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 30965760 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:16.467792+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215741 data_alloc: 218103808 data_used: 4104192
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:17.467910+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:18.468001+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:19.468095+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c35000/0x0/0x4ffc00000, data 0x1575247/0x1627000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:20.468201+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:21.468299+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c35000/0x0/0x4ffc00000, data 0x1575247/0x1627000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.835337639s of 12.891619682s, submitted: 84
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215965 data_alloc: 218103808 data_used: 4104192
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:22.468468+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:23.468587+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c34000/0x0/0x4ffc00000, data 0x1576247/0x1628000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c34000/0x0/0x4ffc00000, data 0x1576247/0x1628000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:24.468705+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:25.468862+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:26.469029+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215965 data_alloc: 218103808 data_used: 4104192
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:27.469195+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:28.469338+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c32000/0x0/0x4ffc00000, data 0x1578247/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 31539200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:29.469460+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009800 session 0x564fb32d41e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:30.469564+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:31.469666+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9013000/0x0/0x4ffc00000, data 0x2197247/0x2249000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304295 data_alloc: 218103808 data_used: 4104192
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:32.469780+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6635c00 session 0x564fb4f94b40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb341cd20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:33.469960+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:34.470070+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:35.470175+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:36.470281+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 22093824 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389567 data_alloc: 234881024 data_used: 16691200
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:37.470372+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9013000/0x0/0x4ffc00000, data 0x2197247/0x2249000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 22085632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:38.470460+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 22085632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:39.470555+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9013000/0x0/0x4ffc00000, data 0x2197247/0x2249000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 22052864 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:40.470703+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.671489716s of 18.688638687s, submitted: 13
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 22011904 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:41.470857+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 22011904 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389871 data_alloc: 234881024 data_used: 16691200
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:42.470997+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9011000/0x0/0x4ffc00000, data 0x2198247/0x224a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 118161408 unmapped: 20930560 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:43.471124+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21962752 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:44.471279+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 118243328 unmapped: 20848640 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:45.471444+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x26aa247/0x275c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120676352 unmapped: 18415616 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:46.471578+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120676352 unmapped: 18415616 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435597 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:47.471728+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120676352 unmapped: 18415616 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:48.471881+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18407424 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8af0000/0x0/0x4ffc00000, data 0x26ba247/0x276c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:49.472010+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18407424 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:50.472138+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18407424 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:51.472225+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18407424 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435597 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:52.472329+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8af0000/0x0/0x4ffc00000, data 0x26ba247/0x276c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 18399232 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:53.472424+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.152483940s of 13.195999146s, submitted: 58
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 18374656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:54.472600+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 18374656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:55.472757+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:56.472927+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431246 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:57.473034+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aed000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aed000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:58.473126+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:59.473304+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:00.473437+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aed000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119963648 unmapped: 19128320 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:01.473553+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119971840 unmapped: 19120128 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430778 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aef000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:02.473648+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119971840 unmapped: 19120128 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:03.473736+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119971840 unmapped: 19120128 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:04.473862+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.138096809s of 11.145630836s, submitted: 6
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 19111936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aef000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:05.473966+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 19111936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:06.474062+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 19111936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430970 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:07.474161+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 19111936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:08.474255+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19103744 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:09.474326+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19103744 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:10.474458+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19103744 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:11.474597+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431122 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:12.474710+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:13.474841+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:14.475016+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:15.475180+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.510084152s of 11.513150215s, submitted: 3
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:16.475423+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431122 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:17.475521+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:18.475637+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:19.475803+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:20.475972+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:21.476073+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431290 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:22.476183+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:23.476323+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:24.476461+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19054592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:25.476560+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19054592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:26.476683+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26bd247/0x276f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19054592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430978 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:27.476792+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19054592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:28.476930+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.267045021s of 12.273044586s, submitted: 5
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 19030016 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:29.477098+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 19030016 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26bd247/0x276f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:30.477241+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26bd247/0x276f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 19021824 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:31.477353+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 19021824 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431146 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:32.477458+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:33.477582+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26bd247/0x276f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:34.477727+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:35.477881+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:36.478015+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431266 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:37.478127+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:38.478217+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:39.478379+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.515620232s of 11.520205498s, submitted: 3
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:40.478472+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:41.478576+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431282 data_alloc: 234881024 data_used: 17227776
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:42.478716+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:43.478818+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:44.478976+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 19005440 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:45.479076+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120111104 unmapped: 18980864 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:46.479175+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb30ae960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662c400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662c400 session 0x564fb32d72c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223518 data_alloc: 218103808 data_used: 4104192
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:47.479275+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:48.479415+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:49.479967+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:50.480109+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.488348007s of 10.496168137s, submitted: 12
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 27942912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:51.480202+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x157e247/0x1630000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 27942912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223534 data_alloc: 218103808 data_used: 4104192
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:52.480329+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8719000 session 0x564fb64425a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6008400 session 0x564fb55eaf00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb55ad4a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c2c000/0x0/0x4ffc00000, data 0x157e247/0x1630000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:53.480474+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:54.480651+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:55.480808+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:56.480940+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098915 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:57.481040+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:58.481170+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:59.481272+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb3ccf4a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009800 session 0x564fb663c780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:00.481370+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:01.481455+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098915 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:02.481567+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662a000 session 0x564fb4207860
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:03.481689+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:04.481852+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:05.481966+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:06.482093+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098915 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:07.482255+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:08.482390+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:09.482567+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:10.482787+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:11.482973+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098915 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:12.483197+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.798551559s of 21.831386566s, submitted: 48
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb51661e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6008400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6008400 session 0x564fb4fe94a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009800 session 0x564fb32d4d20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb30ae960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb3cf9a40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 31047680 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:13.483358+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:14.483514+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:15.483676+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:16.483840+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130144 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:17.484004+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:18.484144+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:19.484275+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:20.484395+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:21.484531+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151252 data_alloc: 218103808 data_used: 3313664
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:22.484684+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:23.484832+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:24.484982+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.434982300s of 12.449465752s, submitted: 17
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb3cd2960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8719000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8719000 session 0x564fb55ac780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:25.485148+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:26.485304+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101268 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:27.485420+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:28.485549+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:29.485709+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:30.485841+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:31.485984+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101004 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:32.486145+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb6550f00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb6550d20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb32d4960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb32d41e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb32d4d20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb3cf83c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb3cef0e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8719000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8719000 session 0x564fb538f4a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb538e1e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb66685a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb55dd680
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:33.486260+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:34.486396+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:35.486534+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa698000/0x0/0x4ffc00000, data 0xb141c2/0xbc4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:36.486656+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb538e780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb42061e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 32784384 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120682 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:37.486762+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 32784384 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:38.486917+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb4207860
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.719092369s of 13.742123604s, submitted: 25
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb4206960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:39.487036+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb381f5/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:40.487145+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:41.487263+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141881 data_alloc: 218103808 data_used: 2322432
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:42.487390+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:43.487508+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb55eaf00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb55ac3c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb381f5/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb42074a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb381f5/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:44.487616+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:45.487733+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:46.487841+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107044 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:47.487944+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:48.488041+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:49.488143+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:50.488243+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:51.488350+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107044 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:52.488489+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:53.488624+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:54.488758+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:55.488848+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:56.488940+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107044 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:57.489042+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:58.489133+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:59.489249+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:00.489352+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.812772751s of 21.829757690s, submitted: 23
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:01.489458+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106912 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:02.489564+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb538e1e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb538e780
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb54bde00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb3cef0e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:03.489656+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb3cf9a40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb3cf8960
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb3ccfc20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb4fe83c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb3ccf4a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:04.489750+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0xda81c1/0xe58000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:05.489875+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:06.489928+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb51430e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146351 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:07.490016+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb51432c0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0xda81c1/0xe58000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb5142f00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb5143c20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:08.490150+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 32989184 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:09.490242+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0xda81f4/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:10.490334+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:11.490432+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184203 data_alloc: 218103808 data_used: 4931584
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:12.490549+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:13.490711+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:14.491026+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:15.491235+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0xda81f4/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:16.491625+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184203 data_alloc: 218103808 data_used: 4931584
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:17.491721+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0xda81f4/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:18.491832+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.024320602s of 18.049039841s, submitted: 28
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 27426816 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:19.491938+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:20.492034+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:21.492192+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ed0000/0x0/0x4ffc00000, data 0x12d21f4/0x1384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235909 data_alloc: 218103808 data_used: 5124096
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:22.492310+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:23.492425+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:24.492554+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:25.492686+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ed0000/0x0/0x4ffc00000, data 0x12d21f4/0x1384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:26.492775+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231621 data_alloc: 218103808 data_used: 5124096
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:27.492883+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ed5000/0x0/0x4ffc00000, data 0x12d51f4/0x1387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:28.493028+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:29.493119+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.634371758s of 11.680276871s, submitted: 63
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:30.493270+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb5142d20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb5e841e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb66e8780
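The reset/challenge cluster above can be read by connection pointer: the challenge added on con 0x564fb3b7b400 is immediately followed by a reset of the same con, i.e. a peer re-handshaking an existing connection while the old session (0x564fb66e8780) is dropped. A small correlator over these exact lines (the reading of the pointers is an assumption, not taken from the messenger code):

```python
# Sketch: group messenger events by connection pointer to spot
# challenge-then-reset sequences on the same con.
import re
from collections import defaultdict

lines = [
    "osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb5142d20",
    "osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb5e841e0",
    "monclient: handle_auth_request added challenge on 0x564fb3b7b400",
    "osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb66e8780",
]

events = defaultdict(list)
for line in lines:
    m = re.search(r"(ms_handle_reset con|added challenge on) (0x[0-9a-f]+)", line)
    if m:
        events[m.group(2)].append(m.group(1))

for con, evs in events.items():
    print(con, "->", evs)   # 0x564fb3b7b400 shows challenge then reset
```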
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:31.493437+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:32.493588+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:33.493727+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:34.493868+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:35.494005+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:36.494127+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:37.494252+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:38.494390+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:39.494843+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:40.495012+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:41.495152+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:42.495334+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:43.495754+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:44.495883+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:45.496034+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:46.496195+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:47.496355+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:48.496459+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread fragmentation_score=0.000274 took=0.000032s
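fragmentation_score=0.000274 reads as an essentially unfragmented allocator, measured in 32 µs. A filter for tracking the score over a longer stretch of log (treating it as a 0..1 score is an assumption about the metric's scale, not something documented here):

```python
# Sketch: pull every allocator fragmentation score out of a log stream
# piped in on stdin; the 0..1 scale is an assumption about the metric.
import re
import sys

pattern = re.compile(r"fragmentation_score=([0-9.]+) took=([0-9.]+)s")
for line in sys.stdin:
    m = pattern.search(line)
    if m:
        print(f"score={float(m.group(1)):.6f} "
              f"measured in {float(m.group(2))*1e6:.0f} us")
```

Run it as e.g. `python3 frag_scores.py < osd.log` (the file name is hypothetical).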
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:49.496558+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:50.496653+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:51.496755+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:52.496887+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:53.497035+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:54.497201+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:55.497302+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:56.497411+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:57.497571+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:58.497664+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:59.497817+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:00.497977+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:01.498064+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:02.498195+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:03.498324+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:04.498454+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:05.498573+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:06.498742+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:07.498868+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.998508453s of 38.033313751s, submitted: 53
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb55acf00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:08.498943+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108748800 unmapped: 30343168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:09.499050+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108748800 unmapped: 30343168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c1000/0x0/0x4ffc00000, data 0xdec1b2/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:10.499186+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 28655616 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:11.499300+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 28557312 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e4f000/0x0/0x4ffc00000, data 0x135e1b2/0x140d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:12.499435+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 28557312 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201943 data_alloc: 218103808 data_used: 172032
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:13.499587+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:14.499748+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:15.499923+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:16.500064+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e4f000/0x0/0x4ffc00000, data 0x135e1b2/0x140d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:17.500164+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228351 data_alloc: 218103808 data_used: 4169728
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e4d000/0x0/0x4ffc00000, data 0x13601b2/0x140f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:18.500300+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:19.500444+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:20.500578+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e4d000/0x0/0x4ffc00000, data 0x13601b2/0x140f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:21.500706+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.297528267s of 13.345145226s, submitted: 56
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:22.500821+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265043 data_alloc: 218103808 data_used: 4227072
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:23.500947+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:24.501075+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:25.501200+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:26.501320+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b65000/0x0/0x4ffc00000, data 0x16391b2/0x16e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:27.501454+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265267 data_alloc: 218103808 data_used: 4227072
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:28.501583+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:29.501734+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:30.501874+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b72000/0x0/0x4ffc00000, data 0x163b1b2/0x16ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:31.502037+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b72000/0x0/0x4ffc00000, data 0x163b1b2/0x16ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:32.502150+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259899 data_alloc: 218103808 data_used: 4227072
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:33.502309+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.009808540s of 12.043086052s, submitted: 54
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:34.502486+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b71000/0x0/0x4ffc00000, data 0x163c1b2/0x16eb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:35.502661+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:36.502798+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b71000/0x0/0x4ffc00000, data 0x163c1b2/0x16eb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:37.502944+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260123 data_alloc: 218103808 data_used: 4227072
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:38.503090+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c44c00 session 0x564fb52dc1e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662cc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662cc00 session 0x564fb3e174a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b71000/0x0/0x4ffc00000, data 0x163c1b2/0x16eb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:39.503257+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 27271168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:40.503352+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 27271168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:41.503477+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 27271168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb54bda40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb3cd3860
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:42.503627+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130238 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:43.503763+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:44.503916+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:45.504050+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:46.504155+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:47.504281+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130238 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:48.504397+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:49.504498+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:50.504602+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:51.504728+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:52.504833+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130238 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:53.504936+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:54.505047+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:55.505140+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:56.505236+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:57.505358+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130238 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:58.505454+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:59.505569+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.766538620s of 26.777446747s, submitted: 18
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:00.505668+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c44c00 session 0x564fb55ea000
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0xd9e1b2/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:01.505808+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:02.506012+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166848 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:03.506115+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb5afeb40
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:04.506217+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662cc00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662cc00 session 0x564fb32d7c20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:05.506330+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0xd9e1b2/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb64bc800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb64bc800 session 0x564fb54bd0e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb64bc800
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb64bc800 session 0x564fb55ddc20
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:06.506463+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:07.506561+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 27230208 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170648 data_alloc: 218103808 data_used: 696320
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:08.506676+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:09.506796+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0xd9e1b2/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:10.506925+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:11.507025+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:12.507191+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191776 data_alloc: 218103808 data_used: 3842048
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:13.507334+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:14.507461+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:15.507582+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0xd9e1b2/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:16.507717+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.093719482s of 16.104257584s, submitted: 10
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa391000/0x0/0x4ffc00000, data 0xe1c1b2/0xecb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [1,4])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 23453696 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:17.507839+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245506 data_alloc: 218103808 data_used: 4386816
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:18.508029+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:19.508162+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e22000/0x0/0x4ffc00000, data 0x138a1b2/0x1439000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:20.508321+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:21.508471+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:22.508627+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245506 data_alloc: 218103808 data_used: 4386816
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:23.508758+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e22000/0x0/0x4ffc00000, data 0x138a1b2/0x1439000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:24.508918+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:25.509031+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:26.509168+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:27.509272+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245506 data_alloc: 218103808 data_used: 4386816
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e22000/0x0/0x4ffc00000, data 0x138a1b2/0x1439000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:28.509380+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb55eb4a0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44c00
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.580277443s of 12.613999367s, submitted: 56
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c44c00 session 0x564fb5bc01e0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:29.509531+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:30.509661+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:31.509811+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:32.509946+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:33.510123+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:34.510257+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:35.510439+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:36.510619+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:37.510731+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:38.510887+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:39.511068+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:40.511171+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:41.511305+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:42.511541+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:43.511676+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:44.511816+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:45.511957+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:46.512157+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:47.512350+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:48.512505+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:49.512673+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:50.512840+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:51.512993+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:52.513184+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:53.513335+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:54.513521+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:55.513637+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:56.513787+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:57.513936+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3470 syncs, 3.43 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2964 writes, 9767 keys, 2964 commit groups, 1.0 writes per commit group, ingest: 10.40 MB, 0.02 MB/s
                                           Interval WAL: 2964 writes, 1326 syncs, 2.24 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:58.514055+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:59.514166+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:00.514276+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:01.514383+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:02.514559+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:03.514738+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:04.514881+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:05.515084+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:06.515229+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:07.515384+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:08.515538+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:09.515664+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:10.515782+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:11.515929+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:12.516088+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:13.516234+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:14.516408+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:15.516530+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:16.516639+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:17.516775+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:18.516920+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:19.517122+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:20.517333+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:21.517428+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:22.517543+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:23.517665+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:24.517762+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:25.517861+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:26.517941+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:27.518038+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:28.518141+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:29.518248+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:30.518353+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:31.518451+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:32.518576+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:03:05 compute-0 ceph-osd[82261]: do_command 'config diff' '{prefix=config diff}'
Nov 25 10:03:05 compute-0 ceph-osd[82261]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 10:03:05 compute-0 ceph-osd[82261]: do_command 'config show' '{prefix=config show}'
Nov 25 10:03:05 compute-0 ceph-osd[82261]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:03:05 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 27451392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:03:05 compute-0 ceph-osd[82261]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 10:03:05 compute-0 ceph-osd[82261]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 10:03:05 compute-0 ceph-osd[82261]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 10:03:05 compute-0 ceph-osd[82261]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:33.518674+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 27762688 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:03:05 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:34.518780+0000)
Nov 25 10:03:05 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 27762688 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:03:05 compute-0 ceph-osd[82261]: do_command 'log dump' '{prefix=log dump}'
Nov 25 10:03:05 compute-0 ceph-mon[74207]: pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.17529 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4109658705' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3845725757' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2991243437' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.17547 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3003311665' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/290580060' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1703170186' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1688808163' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3862137652' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3310589035' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/771778548' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3093908409' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2822567927' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3119001524' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3238745345' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2123548578' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3730769304' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:03:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 25 10:03:05 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807673395' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:03:05.391 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:03:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:03:05.391 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:03:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:03:05.392 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:03:05 compute-0 rsyslogd[961]: imjournal from <np0005534694:ceph-osd>: begin to drop messages due to rate-limiting
Nov 25 10:03:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 25 10:03:05 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260874306' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:03:05 compute-0 crontab[274408]: (root) LIST (root)
Nov 25 10:03:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 25 10:03:05 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348475619' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 25 10:03:06 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2422675866' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 nova_compute[253512]: 2025-11-25 10:03:06.197 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3807673395' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1416867171' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4267367237' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3265515761' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/513452612' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3947504297' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2940205370' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/20258841' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1025545117' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4260874306' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3097540885' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/608595569' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3348475619' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3744911263' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2422675866' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/782075080' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3746439262' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27413 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27376 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:06.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 25 10:03:06 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064436364' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27443 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:06.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:06 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27403 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27409 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27464 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 25 10:03:06 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1261157585' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:06 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 25 10:03:06 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561018519' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27433 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:07.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:07.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:07.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:07.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27442 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 25 10:03:07 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193353403' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2516758813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.27413 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.27376 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/253986332' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4064436364' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2637886262' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3775875560' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.27443 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.27403 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.27409 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.27464 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1261157585' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2561018519' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/193353403' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17748 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17763 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27463 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 10:03:07 compute-0 systemd[1]: Started Hostname Service.
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17781 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17784 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27539 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27490 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:07 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27502 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:08 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27566 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:08 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17832 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.27433 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.27442 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.17748 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.27506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/700548910' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.17763 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.27463 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/287032064' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.17781 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.17784 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.27539 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2342319568' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.27490 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/522284379' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1806013605' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 10:03:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2607549409' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27526 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27587 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:08.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:08 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17862 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:08 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27559 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:08.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:08 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27623 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:08 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17898 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 25 10:03:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515064773' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:03:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:08.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:08.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:08.862Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:08.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17925 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.27502 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.27566 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.17832 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2607549409' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.27526 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.27587 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2447917133' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2201478624' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.17862 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1669203688' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.27559 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.27623 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.17898 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1515064773' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3067395760' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3605452620' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27634 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27637 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 25 10:03:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1905069120' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27649 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:09 compute-0 nova_compute[253512]: 2025-11-25 10:03:09.827 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:10 compute-0 podman[275178]: 2025-11-25 10:03:10.057651242 +0000 UTC m=+0.118535209 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:10] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:03:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:10] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:03:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 25 10:03:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246662599' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='client.17925 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/471202242' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='client.27634 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='client.27637 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1905069120' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='client.27649 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2749113096' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3490083567' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:03:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2934849903' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:03:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:10.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:10 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.17985 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:10.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 25 10:03:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2850184845' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27776 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:11 compute-0 nova_compute[253512]: 2025-11-25 10:03:11.198 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:11 compute-0 ceph-mon[74207]: pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4246662599' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3270856375' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1728719757' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mon[74207]: from='client.17985 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1648111896' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3837461833' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2850184845' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3299312650' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 25 10:03:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3057636097' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27724 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 25 10:03:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392476456' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 25 10:03:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1806941140' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18057 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mon[74207]: from='client.27776 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3057636097' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mon[74207]: from='client.27724 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3037981209' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2392476456' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2060492933' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1561355588' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1806941140' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/73558721' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18063 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27769 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:12.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 25 10:03:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79477528' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27854 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:12 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27790 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18090 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 25 10:03:13 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4050823747' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27805 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:13 compute-0 ceph-mon[74207]: from='client.18057 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: from='client.18063 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: from='client.27769 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2598971785' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1362293775' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/79477528' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: from='client.27854 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: from='client.27790 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4050823747' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18099 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 25 10:03:13 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74182607' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:03:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18126 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27917 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27850 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18144 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: from='client.18090 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: from='client.27805 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: from='client.18099 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/691021732' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1257740203' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/74182607' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2835888216' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1956004436' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27935 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:14.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27862 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Nov 25 10:03:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861715961' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:03:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:14.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Nov 25 10:03:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2128233638' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:03:14 compute-0 nova_compute[253512]: 2025-11-25 10:03:14.829 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 25 10:03:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4168176874' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:03:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Nov 25 10:03:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1859111447' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:03:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18192 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18198 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.18126 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.27917 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.27850 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.18144 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.27935 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.27862 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1861715961' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1716749149' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2128233638' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4168176874' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1859111447' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1141962610' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27901 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27974 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27980 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27907 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Nov 25 10:03:15 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2714505728' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:03:15 compute-0 podman[276184]: 2025-11-25 10:03:15.995831902 +0000 UTC m=+0.056072674 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:03:16 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:16 compute-0 nova_compute[253512]: 2025-11-25 10:03:16.200 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.18192 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.18198 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.27901 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.27974 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.27980 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.27907 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2714505728' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3646571918' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2300953125' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1812479581' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 10:03:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:16.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:16 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18252 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:16.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:16 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18267 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:16 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28028 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.27961 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:17.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:17.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:17.089Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:17.089Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 10:03:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2157215697' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:03:17 compute-0 ovs-appctl[277097]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 10:03:17 compute-0 ovs-appctl[277103]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 10:03:17 compute-0 ovs-appctl[277107]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 10:03:17 compute-0 ceph-mon[74207]: pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2628656410' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1702968464' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mon[74207]: from='client.18252 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3544174585' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2826370956' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mon[74207]: from='client.18267 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2157215697' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1306955108' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Nov 25 10:03:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1753420583' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 10:03:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Nov 25 10:03:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210099267' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:18 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28064 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.28028 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.27961 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1562817346' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1753420583' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1651478465' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/537900919' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1210099267' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/917369986' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2147542039' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2584142324' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:18.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 10:03:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729700647' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28082 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:18.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:18 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28024 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:18.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:18.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:18.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:18.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 25 10:03:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1122338819' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Nov 25 10:03:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3064872230' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.28064 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3973466451' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1729700647' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.28082 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.28024 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1122338819' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1010117050' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3433541028' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3064872230' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1563001551' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28121 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Nov 25 10:03:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555726303' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28127 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:19 compute-0 nova_compute[253512]: 2025-11-25 10:03:19.831 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18381 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28145 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:20] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:03:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:20] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:03:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28063 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Nov 25 10:03:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050656807' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/258153879' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mon[74207]: from='client.28121 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1555726303' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mon[74207]: from='client.28127 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/887379790' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3697817588' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1050656807' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:20.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28151 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28069 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:20.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Nov 25 10:03:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820153740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18429 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 nova_compute[253512]: 2025-11-25 10:03:21.200 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 25 10:03:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727700563' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28190 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.18381 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.28145 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.28063 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.28151 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.28069 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/820153740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2139458383' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3958941425' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2197708573' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4072822949' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2727700563' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28105 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28196 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28114 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28120 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
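The two identical pg_autoscaler passes above (one per dispatched autoscale-status call) are reproducible from the logged numbers: each pool's share of the 64411926528-byte subtree is multiplied by its bias and by the PG budget (target PGs per OSD times the number of OSDs), then quantized. A minimal sketch of that arithmetic, assuming 3 OSDs and the default mon_target_pg_per_osd of 100 (both assumptions; the real mgr module's quantization and clamping logic is more involved):

    # Sketch of the "pg target" figure in the pg_autoscaler lines above;
    # NOT the mgr module itself. Assumes 3 OSDs x mon_target_pg_per_osd=100,
    # i.e. a budget of 300 PGs for the subtree.
    def raw_pg_target(usage_ratio, bias, osd_count=3, target_pg_per_osd=100):
        return usage_ratio * bias * osd_count * target_pg_per_osd

    # Pool 'default.rgw.meta': usage 1.2718141564107572e-07 of space, bias 4.0
    print(raw_pg_target(1.2718141564107572e-07, 4.0))
    # ~0.000152617698769..., matching the logged "pg target"

    # The module then rounds toward a power of two and leaves pg_num alone
    # unless the target is roughly 3x off, hence "quantized to 32 (current 32)"
    # on every pool here.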
Nov 25 10:03:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18468 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Nov 25 10:03:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/665351589' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.18429 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.28190 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.28105 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.28196 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.28114 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.28120 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.18468 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/733458927' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1370730776' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/665351589' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3691621473' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3438577591' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Nov 25 10:03:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877302119' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
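Each cmd=[...] entry above is a JSON mon command being dispatched on the audit channel. For reference, the same commands can be sent from the librados Python binding (python3-rados); a minimal sketch:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    cmd = json.dumps({"prefix": "osd numa-status", "format": "json-pretty"})
    ret, out, errs = cluster.mon_command(cmd, b'')  # (rc, stdout, stderr)
    print(out.decode())
    cluster.shutdown()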
Nov 25 10:03:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:22.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:22 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28247 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28159 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:22.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
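The anonymous "HEAD / HTTP/1.0" requests above recur every ~2 seconds from 192.168.122.100 and .102 and always return 200 with near-zero latency, which looks like load-balancer health probes against the beast frontend. A hedged reproduction (host and port are assumptions; substitute the actual RGW endpoint):

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # a healthy radosgw answers 200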
Nov 25 10:03:22 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18513 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28262 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:22 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28171 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18528 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 10:03:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316437577' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mon[74207]: pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1877302119' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mon[74207]: from='client.28247 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mon[74207]: from='client.28159 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mon[74207]: from='client.18513 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mon[74207]: from='client.28262 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mon[74207]: from='client.28171 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2620515358' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2439157298' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3316437577' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 sudo[278705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:03:23 compute-0 sudo[278705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:23 compute-0 sudo[278705]: pam_unix(sudo:session): session closed for user root
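The ceph-admin sudo session running /bin/true is consistent with cephadm probing the host: a no-op command to confirm passwordless root escalation before doing real work (a check-host call follows a few seconds later). The same check by hand:

    import subprocess

    # "sudo -n" errors out instead of prompting, so a non-zero rc means the
    # passwordless-sudo requirement cephadm relies on is not met here.
    rc = subprocess.run(["sudo", "-n", "/bin/true"]).returncode
    print("passwordless sudo OK" if rc == 0 else "sudo would prompt or deny")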
Nov 25 10:03:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Nov 25 10:03:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3590049750' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:23 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18561 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:24 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18567 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-0 ceph-mon[74207]: from='client.18528 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/253141219' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2815871477' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3590049750' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-0 ceph-mon[74207]: from='client.18561 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:24.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:24.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:24 compute-0 nova_compute[253512]: 2025-11-25 10:03:24.832 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Nov 25 10:03:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2308878960' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:25 compute-0 virtqemud[252911]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
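The virtqemud error comes from libvirt's modular-daemon split: virtqemud tried the read-only socket of virtstoraged, which is not running on this host. A quick status check (unit names as shipped with modular libvirt; verify they match this system):

    import subprocess

    for unit in ("virtstoraged.service", "virtstoraged.socket",
                 "virtstoraged-ro.socket"):
        state = subprocess.run(["systemctl", "is-active", unit],
                               capture_output=True, text=True).stdout.strip()
        print(unit, state)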
Nov 25 10:03:25 compute-0 ceph-mon[74207]: pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:25 compute-0 ceph-mon[74207]: from='client.18567 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:03:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/350362228' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2759359847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2308878960' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 10:03:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:26 compute-0 nova_compute[253512]: 2025-11-25 10:03:26.202 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:26 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 10:03:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2944657567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:26 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 10:03:26 compute-0 nova_compute[253512]: 2025-11-25 10:03:26.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:26 compute-0 nova_compute[253512]: 2025-11-25 10:03:26.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:26.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:26.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:27.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:27.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:27.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:27.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
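All three dashboard webhook receivers fail at the same step: resolving the np0005534694/5/6.shiftstack hostnames against the resolver at 192.168.122.80, so the notifications can never be delivered no matter how often Alertmanager retries. A reproduction of the failing lookup:

    import socket

    try:
        print(socket.getaddrinfo("np0005534695.shiftstack", 8443))
    except socket.gaierror as exc:
        print("lookup failed:", exc)  # the same "no such host" the dispatcher logs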
Nov 25 10:03:27 compute-0 ceph-mon[74207]: pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3206515216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:27 compute-0 nova_compute[253512]: 2025-11-25 10:03:27.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:27 compute-0 nova_compute[253512]: 2025-11-25 10:03:27.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:27 compute-0 nova_compute[253512]: 2025-11-25 10:03:27.508 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:03:27 compute-0 nova_compute[253512]: 2025-11-25 10:03:27.508 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:03:27 compute-0 nova_compute[253512]: 2025-11-25 10:03:27.508 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:03:27 compute-0 nova_compute[253512]: 2025-11-25 10:03:27.508 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:03:27 compute-0 nova_compute[253512]: 2025-11-25 10:03:27.509 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:03:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:03:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101623836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:27 compute-0 nova_compute[253512]: 2025-11-25 10:03:27.845 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
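As these two lines show, nova's resource tracker sizes Ceph-backed storage by shelling out to ceph df with the openstack client id. The same call stand-alone (command copied from the log; the "stats" keys are standard ceph df JSON output):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])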
Nov 25 10:03:28 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.051 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.052 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4432MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.052 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.052 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.233 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.234 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.340 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Refreshing inventories for resource provider d9873737-caae-40cc-9346-77a33537057c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:03:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1101623836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:28.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.482 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updating ProviderTree inventory for provider d9873737-caae-40cc-9346-77a33537057c from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.483 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updating inventory in ProviderTree for provider d9873737-caae-40cc-9346-77a33537057c with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
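Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class. Worked through with this host's figures (a sketch of the formula, not nova code):

    inventory = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, f in inventory.items():
        print(rc, (f["total"] - f["reserved"]) * f["allocation_ratio"])
    # VCPU 16.0, MEMORY_MB 7169.0, DISK_GB 52.2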
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.500 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Refreshing aggregate associations for resource provider d9873737-caae-40cc-9346-77a33537057c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.523 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Refreshing trait associations for resource provider d9873737-caae-40cc-9346-77a33537057c, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX512VPCLMULQDQ,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX512VAES,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.563 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:03:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:28.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:28.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:28.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:28.869Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:28.869Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:03:28 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1492594854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.908 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.913 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.935 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.936 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:03:28 compute-0 nova_compute[253512]: 2025-11-25 10:03:28.936 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
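The Acquiring/acquired/released triple with its waited/held timings is oslo.concurrency's lockutils logging at DEBUG; nova serializes resource-tracker updates under the "compute_resources" lock. Minimal use of the same API:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # critical section; lockutils emits the same
        # "acquired ... waited" / "released ... held" DEBUG lines
        pass

    update_available_resource()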
Nov 25 10:03:29 compute-0 ceph-mon[74207]: pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4009067469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1492594854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:03:29 compute-0 nova_compute[253512]: 2025-11-25 10:03:29.833 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:29 compute-0 nova_compute[253512]: 2025-11-25 10:03:29.937 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:29 compute-0 nova_compute[253512]: 2025-11-25 10:03:29.938 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:29 compute-0 nova_compute[253512]: 2025-11-25 10:03:29.938 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:29 compute-0 nova_compute[253512]: 2025-11-25 10:03:29.938 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:03:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:03:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:30 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:30] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:03:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:30] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
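These two lines are the mgr prometheus module serving /metrics to Prometheus 2.51.0, logged once by the container and once by the module's cherrypy server. A manual fetch; 9283 is the module's usual default port and an assumption for this deployment:

    import urllib.request

    body = urllib.request.urlopen(
        "http://192.168.122.100:9283/metrics", timeout=5).read()
    print(body.decode().splitlines()[0])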
Nov 25 10:03:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:30 compute-0 nova_compute[253512]: 2025-11-25 10:03:30.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:30 compute-0 nova_compute[253512]: 2025-11-25 10:03:30.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:03:30 compute-0 nova_compute[253512]: 2025-11-25 10:03:30.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:03:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:30.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:30 compute-0 nova_compute[253512]: 2025-11-25 10:03:30.488 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:03:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:30.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:31 compute-0 nova_compute[253512]: 2025-11-25 10:03:31.205 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:31 compute-0 ceph-mon[74207]: pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:32 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:32 compute-0 sudo[279424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:03:32 compute-0 sudo[279424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:32 compute-0 sudo[279424]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:32 compute-0 podman[279448]: 2025-11-25 10:03:32.240534053 +0000 UTC m=+0.039436124 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
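That podman event records a periodic healthcheck pass for ovn_metadata_agent (health_status=healthy, failing streak 0), driven by the 'test': '/openstack/healthcheck' entry in its config_data. The state can be re-run or queried by hand:

    import subprocess

    # Re-run the container's configured healthcheck once.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])

    # Current status; on older podman releases the key is
    # .State.Healthcheck.Status instead of .State.Health.Status.
    print(subprocess.check_output(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"], text=True).strip())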
Nov 25 10:03:32 compute-0 sudo[279457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 25 10:03:32 compute-0 sudo[279457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:32 compute-0 rsyslogd[961]: imjournal: 1041 messages lost due to rate-limiting (20000 allowed within 600 seconds)
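The rsyslogd message means imjournal's own rate limiter discarded 1041 journal entries after the 20000-per-600s budget was exhausted, so the log has a genuine gap at this point. If this volume is expected, the limiter is tuned where imjournal is loaded in /etc/rsyslog.conf via its Ratelimit.Interval and Ratelimit.Burst module parameters (appropriate values are deployment-specific).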
Nov 25 10:03:32 compute-0 nova_compute[253512]: 2025-11-25 10:03:32.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:32 compute-0 nova_compute[253512]: 2025-11-25 10:03:32.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:03:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:32.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:32 compute-0 nova_compute[253512]: 2025-11-25 10:03:32.484 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:03:32 compute-0 nova_compute[253512]: 2025-11-25 10:03:32.484 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 10:03:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 10:03:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:32 compute-0 sudo[279457]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:03:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:03:32 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:32 compute-0 sudo[279507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:03:32 compute-0 sudo[279507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:32 compute-0 sudo[279507]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:32 compute-0 sudo[279532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:03:32 compute-0 sudo[279532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:32.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:33 compute-0 sudo[279532]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:33 compute-0 sudo[279586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:03:33 compute-0 sudo[279586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:33 compute-0 sudo[279586]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:33 compute-0 sudo[279611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- inventory --format=json-pretty --filter-for-batch
Nov 25 10:03:33 compute-0 sudo[279611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:33 compute-0 podman[279667]: 2025-11-25 10:03:33.435717741 +0000 UTC m=+0.030337084 container create 17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:03:33 compute-0 systemd[1]: Started libpod-conmon-17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab.scope.
Nov 25 10:03:33 compute-0 nova_compute[253512]: 2025-11-25 10:03:33.478 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:33 compute-0 nova_compute[253512]: 2025-11-25 10:03:33.479 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 10:03:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:03:33 compute-0 podman[279667]: 2025-11-25 10:03:33.490171421 +0000 UTC m=+0.084790765 container init 17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_babbage, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:03:33 compute-0 ceph-mon[74207]: pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:33 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:33 compute-0 podman[279667]: 2025-11-25 10:03:33.497013598 +0000 UTC m=+0.091632941 container start 17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 10:03:33 compute-0 podman[279667]: 2025-11-25 10:03:33.498052788 +0000 UTC m=+0.092672130 container attach 17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:33 compute-0 confident_babbage[279680]: 167 167
Nov 25 10:03:33 compute-0 systemd[1]: libpod-17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab.scope: Deactivated successfully.
Nov 25 10:03:33 compute-0 podman[279667]: 2025-11-25 10:03:33.501284738 +0000 UTC m=+0.095904082 container died 17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6843a1d3c68713f35a6e15e01c994e4e1e6aa83ef9e7a59d75a8d390ae3f223-merged.mount: Deactivated successfully.
Nov 25 10:03:33 compute-0 podman[279667]: 2025-11-25 10:03:33.422175545 +0000 UTC m=+0.016794908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:33 compute-0 podman[279667]: 2025-11-25 10:03:33.522279413 +0000 UTC m=+0.116898755 container remove 17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:03:33 compute-0 systemd[1]: libpod-conmon-17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab.scope: Deactivated successfully.
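[annotation] The create / init / start / attach / died / remove sequence just above (podman[279667], container confident_babbage) is the one-shot container pattern cephadm uses for each gather-facts and ceph-volume call, and it repeats several more times in this window. Below is a minimal sketch for pulling these lifecycle events out of such journal lines; the regular expression is a hand-rolled assumption, not a podman or cephadm facility, and it relies on name= following image= as it does in the lines above.

import re

# Minimal sketch (hand-rolled regex, assumption: name= always follows image=,
# as in the podman lines logged above; everything after name is ignored).
EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\S+ \S+ \S+ UTC) m=\+\S+ "
    r"container (?P<event>create|init|start|attach|died|remove) "
    r"(?P<cid>[0-9a-f]{64}) \(image=(?P<image>[^,]+), "
    r"name=(?P<name>[^,)]+)"
)

def parse_podman_event(line: str):
    """Return a dict for one podman container lifecycle event, or None."""
    m = EVENT_RE.search(line)
    return m.groupdict() if m else None

# Sample taken from the log above (syslog prefix omitted, labels truncated).
sample = ("podman[279667]: 2025-11-25 10:03:33.497013598 +0000 UTC "
          "m=+0.091632941 container start "
          "17437305d4397b421308190bd2bb075b54d6e8cae2977700d76c9a1f1ab837ab "
          "(image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e5"
          "33db7f1fd84143dd9ba6649a5fd2ec, name=confident_babbage)")
print(parse_podman_event(sample)["event"], parse_podman_event(sample)["name"])
# start confident_babbage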
Nov 25 10:03:33 compute-0 podman[279701]: 2025-11-25 10:03:33.651453472 +0000 UTC m=+0.035848431 container create 3af64720ca971f3b55cc561678f1ce2a2bbbb0cdabc0e721083da85a0afec856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:03:33 compute-0 systemd[1]: Started libpod-conmon-3af64720ca971f3b55cc561678f1ce2a2bbbb0cdabc0e721083da85a0afec856.scope.
Nov 25 10:03:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69268d7de2f44de906d30c79123e869c044fbe8599d532c3b478160d07fd0fa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69268d7de2f44de906d30c79123e869c044fbe8599d532c3b478160d07fd0fa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69268d7de2f44de906d30c79123e869c044fbe8599d532c3b478160d07fd0fa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69268d7de2f44de906d30c79123e869c044fbe8599d532c3b478160d07fd0fa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:33 compute-0 podman[279701]: 2025-11-25 10:03:33.722791929 +0000 UTC m=+0.107186888 container init 3af64720ca971f3b55cc561678f1ce2a2bbbb0cdabc0e721083da85a0afec856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:33 compute-0 podman[279701]: 2025-11-25 10:03:33.727706823 +0000 UTC m=+0.112101772 container start 3af64720ca971f3b55cc561678f1ce2a2bbbb0cdabc0e721083da85a0afec856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:03:33 compute-0 podman[279701]: 2025-11-25 10:03:33.729123323 +0000 UTC m=+0.113518292 container attach 3af64720ca971f3b55cc561678f1ce2a2bbbb0cdabc0e721083da85a0afec856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:03:33 compute-0 podman[279701]: 2025-11-25 10:03:33.633006422 +0000 UTC m=+0.017401391 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:34 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:34 compute-0 angry_snyder[279714]: [
Nov 25 10:03:34 compute-0 angry_snyder[279714]:     {
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         "available": false,
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         "being_replaced": false,
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         "ceph_device_lvm": false,
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         "lsm_data": {},
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         "lvs": [],
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         "path": "/dev/sr0",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         "rejected_reasons": [
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "Has a FileSystem",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "Insufficient space (<5GB)"
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         ],
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         "sys_api": {
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "actuators": null,
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "device_nodes": [
Nov 25 10:03:34 compute-0 angry_snyder[279714]:                 "sr0"
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             ],
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "devname": "sr0",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "human_readable_size": "474.00 KB",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "id_bus": "ata",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "model": "QEMU DVD-ROM",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "nr_requests": "64",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "parent": "/dev/sr0",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "partitions": {},
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "path": "/dev/sr0",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "removable": "1",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "rev": "2.5+",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "ro": "0",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "rotational": "1",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "sas_address": "",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "sas_device_handle": "",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "scheduler_mode": "mq-deadline",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "sectors": 0,
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "sectorsize": "2048",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "size": 485376.0,
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "support_discard": "2048",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "type": "disk",
Nov 25 10:03:34 compute-0 angry_snyder[279714]:             "vendor": "QEMU"
Nov 25 10:03:34 compute-0 angry_snyder[279714]:         }
Nov 25 10:03:34 compute-0 angry_snyder[279714]:     }
Nov 25 10:03:34 compute-0 angry_snyder[279714]: ]
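[annotation] The JSON block above is the report from the ceph-volume "inventory --format=json-pretty --filter-for-batch" run inside the throwaway container: the only raw block device visible on this VM is the QEMU DVD-ROM, rejected both for carrying a filesystem and for being under 5 GB. A minimal sketch of consuming such a report follows; summarize_inventory is a hypothetical standalone helper, not a ceph-volume API, and the field names are taken from the output above.

import json

# Minimal sketch (hypothetical helper, not part of ceph-volume) that summarizes
# a ceph-volume inventory report: one entry per device, with an "available"
# flag and a list of "rejected_reasons".
def summarize_inventory(report_text: str) -> None:
    for dev in json.loads(report_text):
        if dev["available"]:
            print(f'{dev["path"]}: usable ({dev["sys_api"]["human_readable_size"]})')
        else:
            reasons = ", ".join(dev["rejected_reasons"]) or "unspecified"
            print(f'{dev["path"]}: rejected ({reasons})')

# Fed the report logged above, this prints:
#   /dev/sr0: rejected (Has a FileSystem, Insufficient space (<5GB))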
Nov 25 10:03:34 compute-0 systemd[1]: libpod-3af64720ca971f3b55cc561678f1ce2a2bbbb0cdabc0e721083da85a0afec856.scope: Deactivated successfully.
Nov 25 10:03:34 compute-0 podman[280914]: 2025-11-25 10:03:34.304233712 +0000 UTC m=+0.019577393 container died 3af64720ca971f3b55cc561678f1ce2a2bbbb0cdabc0e721083da85a0afec856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Nov 25 10:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-69268d7de2f44de906d30c79123e869c044fbe8599d532c3b478160d07fd0fa2-merged.mount: Deactivated successfully.
Nov 25 10:03:34 compute-0 podman[280914]: 2025-11-25 10:03:34.326443124 +0000 UTC m=+0.041786786 container remove 3af64720ca971f3b55cc561678f1ce2a2bbbb0cdabc0e721083da85a0afec856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:03:34 compute-0 systemd[1]: libpod-conmon-3af64720ca971f3b55cc561678f1ce2a2bbbb0cdabc0e721083da85a0afec856.scope: Deactivated successfully.
Nov 25 10:03:34 compute-0 sudo[279611]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:03:34 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:03:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:03:34 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:03:34 compute-0 sudo[280926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:03:34 compute-0 sudo[280926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:34 compute-0 sudo[280926]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:34 compute-0 sudo[280951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:03:34 compute-0 sudo[280951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:34.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
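[annotation] The radosgw triplets above (====== starting / req done / beast access line) are anonymous "HEAD /" probes from 192.168.122.100 and 192.168.122.102, answered with 200 in under a millisecond; they look like load-balancer health checks rather than user traffic, though that is an inference from the pattern, not something the log states. A minimal sketch for the beast access line follows; BEAST_RE is a hand-written assumption, not an official RGW log parser.

import re

# Minimal sketch (hand-written pattern, assumption) for the beast access-log
# lines above: client IP, user, timestamp, request line, status, bytes, latency.
BEAST_RE = re.compile(
    r'beast: 0x[0-9a-f]+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous '
        '[25/Nov/2025:10:03:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000010s')
m = BEAST_RE.search(line)
print(m.group("client"), m.group("request"), m.group("status"), m.group("latency"))
# 192.168.122.102 HEAD / HTTP/1.0 200 0.001000010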
Nov 25 10:03:34 compute-0 podman[281007]: 2025-11-25 10:03:34.781311989 +0000 UTC m=+0.030722300 container create a131c167331f38c9129493ad3c9cb0ed1a4c8d617db730536ee4a39f5a71f64d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bhabha, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:03:34 compute-0 systemd[1]: Started libpod-conmon-a131c167331f38c9129493ad3c9cb0ed1a4c8d617db730536ee4a39f5a71f64d.scope.
Nov 25 10:03:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:03:34 compute-0 podman[281007]: 2025-11-25 10:03:34.83440775 +0000 UTC m=+0.083818061 container init a131c167331f38c9129493ad3c9cb0ed1a4c8d617db730536ee4a39f5a71f64d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bhabha, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:03:34 compute-0 nova_compute[253512]: 2025-11-25 10:03:34.833 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:34 compute-0 podman[281007]: 2025-11-25 10:03:34.839357289 +0000 UTC m=+0.088767599 container start a131c167331f38c9129493ad3c9cb0ed1a4c8d617db730536ee4a39f5a71f64d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bhabha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:03:34 compute-0 podman[281007]: 2025-11-25 10:03:34.841363481 +0000 UTC m=+0.090773791 container attach a131c167331f38c9129493ad3c9cb0ed1a4c8d617db730536ee4a39f5a71f64d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bhabha, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 10:03:34 compute-0 naughty_bhabha[281020]: 167 167
Nov 25 10:03:34 compute-0 podman[281007]: 2025-11-25 10:03:34.845825672 +0000 UTC m=+0.095235982 container died a131c167331f38c9129493ad3c9cb0ed1a4c8d617db730536ee4a39f5a71f64d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 25 10:03:34 compute-0 systemd[1]: libpod-a131c167331f38c9129493ad3c9cb0ed1a4c8d617db730536ee4a39f5a71f64d.scope: Deactivated successfully.
Nov 25 10:03:34 compute-0 podman[281007]: 2025-11-25 10:03:34.766571595 +0000 UTC m=+0.015981925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2441f6cb763e2f7f63bc17d5a0884b666ea54cbfb27219bc89f80ca25da86bd6-merged.mount: Deactivated successfully.
Nov 25 10:03:34 compute-0 podman[281007]: 2025-11-25 10:03:34.877867989 +0000 UTC m=+0.127278299 container remove a131c167331f38c9129493ad3c9cb0ed1a4c8d617db730536ee4a39f5a71f64d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:03:34 compute-0 systemd[1]: libpod-conmon-a131c167331f38c9129493ad3c9cb0ed1a4c8d617db730536ee4a39f5a71f64d.scope: Deactivated successfully.
Nov 25 10:03:35 compute-0 podman[281042]: 2025-11-25 10:03:35.004424345 +0000 UTC m=+0.030384854 container create 692437bf9f2ac6cbf9a28492ff7ecd9424b6dea87d6ccef64bc25443f5067eb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-0 ceph-mon[74207]: pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:03:35 compute-0 ceph-mon[74207]: pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:03:35 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:03:35 compute-0 systemd[1]: Started libpod-conmon-692437bf9f2ac6cbf9a28492ff7ecd9424b6dea87d6ccef64bc25443f5067eb8.scope.
Nov 25 10:03:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43379c2f036894be3ae92ae41371a9017a8d3bce22ae836696dc81f315225a1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43379c2f036894be3ae92ae41371a9017a8d3bce22ae836696dc81f315225a1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43379c2f036894be3ae92ae41371a9017a8d3bce22ae836696dc81f315225a1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43379c2f036894be3ae92ae41371a9017a8d3bce22ae836696dc81f315225a1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43379c2f036894be3ae92ae41371a9017a8d3bce22ae836696dc81f315225a1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:35 compute-0 podman[281042]: 2025-11-25 10:03:35.063849666 +0000 UTC m=+0.089810174 container init 692437bf9f2ac6cbf9a28492ff7ecd9424b6dea87d6ccef64bc25443f5067eb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackburn, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Nov 25 10:03:35 compute-0 podman[281042]: 2025-11-25 10:03:35.069085085 +0000 UTC m=+0.095045593 container start 692437bf9f2ac6cbf9a28492ff7ecd9424b6dea87d6ccef64bc25443f5067eb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:03:35 compute-0 podman[281042]: 2025-11-25 10:03:35.070738611 +0000 UTC m=+0.096699139 container attach 692437bf9f2ac6cbf9a28492ff7ecd9424b6dea87d6ccef64bc25443f5067eb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:03:35 compute-0 podman[281042]: 2025-11-25 10:03:34.991935736 +0000 UTC m=+0.017896264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:35 compute-0 eager_blackburn[281055]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:03:35 compute-0 eager_blackburn[281055]: --> All data devices are unavailable
Nov 25 10:03:35 compute-0 systemd[1]: libpod-692437bf9f2ac6cbf9a28492ff7ecd9424b6dea87d6ccef64bc25443f5067eb8.scope: Deactivated successfully.
Nov 25 10:03:35 compute-0 podman[281042]: 2025-11-25 10:03:35.336656021 +0000 UTC m=+0.362616529 container died 692437bf9f2ac6cbf9a28492ff7ecd9424b6dea87d6ccef64bc25443f5067eb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackburn, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 10:03:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-43379c2f036894be3ae92ae41371a9017a8d3bce22ae836696dc81f315225a1b-merged.mount: Deactivated successfully.
Nov 25 10:03:35 compute-0 podman[281042]: 2025-11-25 10:03:35.360798477 +0000 UTC m=+0.386758985 container remove 692437bf9f2ac6cbf9a28492ff7ecd9424b6dea87d6ccef64bc25443f5067eb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackburn, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 10:03:35 compute-0 systemd[1]: libpod-conmon-692437bf9f2ac6cbf9a28492ff7ecd9424b6dea87d6ccef64bc25443f5067eb8.scope: Deactivated successfully.
Nov 25 10:03:35 compute-0 sudo[280951]: pam_unix(sudo:session): session closed for user root
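[annotation] Net effect of the batch run that just closed: cephadm handed "ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" a pre-created logical volume, and ceph-volume answered "passed data devices: 0 physical, 1 LVM" then "All data devices are unavailable". The most plausible reading, consistent with the "lvm list" run that follows, is that this LV already carries OSD 1's lv_tags, so the spec re-apply is an idempotent no-op rather than a failure. A loose sketch of that kind of check follows; needs_osd is hypothetical and the data shapes are simplified, the real decision lives inside ceph-volume and is more involved.

# Loose sketch (hypothetical helper, assumption) of deciding whether a
# candidate data device still needs an OSD: it must be reported available by
# the inventory, and must not already appear in "ceph-volume lvm list" output.
def needs_osd(path: str, inventory: list, lvm_list: dict) -> bool:
    already_osd = any(lv.get("lv_path") == path
                      for lvs in lvm_list.values() for lv in lvs)
    available = any(d["path"] == path and d["available"] for d in inventory)
    return available and not already_osd

inventory = [{"path": "/dev/sr0", "available": False}]      # from the report above
lvm_list = {"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0"}]}   # from "lvm list" below
print(needs_osd("/dev/ceph_vg0/ceph_lv0", inventory, lvm_list))  # False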
Nov 25 10:03:35 compute-0 sudo[281080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:03:35 compute-0 sudo[281080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:35 compute-0 sudo[281080]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:35 compute-0 sudo[281105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:03:35 compute-0 sudo[281105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:35 compute-0 nova_compute[253512]: 2025-11-25 10:03:35.499 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:03:35 compute-0 podman[281161]: 2025-11-25 10:03:35.783448051 +0000 UTC m=+0.028764398 container create b2f6f352aa22ff3ce6d523b4bab26cff44d03a5ffe45b47e6dfb06a999c04061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:35 compute-0 systemd[1]: Started libpod-conmon-b2f6f352aa22ff3ce6d523b4bab26cff44d03a5ffe45b47e6dfb06a999c04061.scope.
Nov 25 10:03:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:03:35 compute-0 podman[281161]: 2025-11-25 10:03:35.830914592 +0000 UTC m=+0.076230959 container init b2f6f352aa22ff3ce6d523b4bab26cff44d03a5ffe45b47e6dfb06a999c04061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 25 10:03:35 compute-0 podman[281161]: 2025-11-25 10:03:35.835507569 +0000 UTC m=+0.080823916 container start b2f6f352aa22ff3ce6d523b4bab26cff44d03a5ffe45b47e6dfb06a999c04061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:03:35 compute-0 podman[281161]: 2025-11-25 10:03:35.836767684 +0000 UTC m=+0.082084031 container attach b2f6f352aa22ff3ce6d523b4bab26cff44d03a5ffe45b47e6dfb06a999c04061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:35 compute-0 nostalgic_wu[281174]: 167 167
Nov 25 10:03:35 compute-0 systemd[1]: libpod-b2f6f352aa22ff3ce6d523b4bab26cff44d03a5ffe45b47e6dfb06a999c04061.scope: Deactivated successfully.
Nov 25 10:03:35 compute-0 podman[281161]: 2025-11-25 10:03:35.839963618 +0000 UTC m=+0.085279966 container died b2f6f352aa22ff3ce6d523b4bab26cff44d03a5ffe45b47e6dfb06a999c04061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:03:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dc94e3ede203172fadaf4a18718ec2e1d68cc2513f3157e59acc38e4e0572c1-merged.mount: Deactivated successfully.
Nov 25 10:03:35 compute-0 podman[281161]: 2025-11-25 10:03:35.862232853 +0000 UTC m=+0.107549201 container remove b2f6f352aa22ff3ce6d523b4bab26cff44d03a5ffe45b47e6dfb06a999c04061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 10:03:35 compute-0 podman[281161]: 2025-11-25 10:03:35.771800758 +0000 UTC m=+0.017117125 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:35 compute-0 systemd[1]: libpod-conmon-b2f6f352aa22ff3ce6d523b4bab26cff44d03a5ffe45b47e6dfb06a999c04061.scope: Deactivated successfully.
Nov 25 10:03:36 compute-0 podman[281197]: 2025-11-25 10:03:35.973256954 +0000 UTC m=+0.018252156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:36 compute-0 nova_compute[253512]: 2025-11-25 10:03:36.206 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:36 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 889 B/s rd, 0 op/s
Nov 25 10:03:36 compute-0 podman[281197]: 2025-11-25 10:03:36.392575329 +0000 UTC m=+0.437570510 container create 6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 10:03:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:03:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:03:36 compute-0 systemd[1]: Started libpod-conmon-6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9.scope.
Nov 25 10:03:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7e2e5c21fbe142e89a3e0d36c4b52c1e43e5a03239983245e5cf542d50cf1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7e2e5c21fbe142e89a3e0d36c4b52c1e43e5a03239983245e5cf542d50cf1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7e2e5c21fbe142e89a3e0d36c4b52c1e43e5a03239983245e5cf542d50cf1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7e2e5c21fbe142e89a3e0d36c4b52c1e43e5a03239983245e5cf542d50cf1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:36 compute-0 podman[281197]: 2025-11-25 10:03:36.657664777 +0000 UTC m=+0.702659969 container init 6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goldstine, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 10:03:36 compute-0 podman[281197]: 2025-11-25 10:03:36.662676714 +0000 UTC m=+0.707671896 container start 6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goldstine, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:36 compute-0 podman[281197]: 2025-11-25 10:03:36.664551379 +0000 UTC m=+0.709546560 container attach 6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:03:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:36.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]: {
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:     "1": [
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:         {
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "devices": [
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "/dev/loop3"
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             ],
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "lv_name": "ceph_lv0",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "lv_size": "21470642176",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "name": "ceph_lv0",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "tags": {
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.cluster_name": "ceph",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.crush_device_class": "",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.encrypted": "0",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.osd_id": "1",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.type": "block",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.vdo": "0",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:                 "ceph.with_tpm": "0"
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             },
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "type": "block",
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:             "vg_name": "ceph_vg0"
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:         }
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]:     ]
Nov 25 10:03:36 compute-0 fervent_goldstine[281211]: }
Nov 25 10:03:36 compute-0 systemd[1]: libpod-6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9.scope: Deactivated successfully.
Nov 25 10:03:36 compute-0 conmon[281211]: conmon 6f6cf84fdba0bfdd40da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9.scope/container/memory.events
Nov 25 10:03:36 compute-0 podman[281220]: 2025-11-25 10:03:36.924055098 +0000 UTC m=+0.016714365 container died 6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goldstine, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:03:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a7e2e5c21fbe142e89a3e0d36c4b52c1e43e5a03239983245e5cf542d50cf1b-merged.mount: Deactivated successfully.
Nov 25 10:03:36 compute-0 podman[281220]: 2025-11-25 10:03:36.945287669 +0000 UTC m=+0.037946927 container remove 6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:03:36 compute-0 systemd[1]: libpod-conmon-6f6cf84fdba0bfdd40da35158bf29125b8b3aab39a2032202de1fdb9ab8b13a9.scope: Deactivated successfully.
Nov 25 10:03:36 compute-0 sudo[281105]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:37 compute-0 sudo[281232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:03:37 compute-0 sudo[281232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:37 compute-0 sudo[281232]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:37 compute-0 sudo[281257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:03:37 compute-0 sudo[281257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:37.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:37.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:37.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:37.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:37 compute-0 podman[281314]: 2025-11-25 10:03:37.377739 +0000 UTC m=+0.028967213 container create 1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ride, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 10:03:37 compute-0 systemd[1]: Started libpod-conmon-1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b.scope.
Nov 25 10:03:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:03:37 compute-0 podman[281314]: 2025-11-25 10:03:37.424640815 +0000 UTC m=+0.075869029 container init 1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ride, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 10:03:37 compute-0 podman[281314]: 2025-11-25 10:03:37.429886432 +0000 UTC m=+0.081114645 container start 1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ride, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:03:37 compute-0 podman[281314]: 2025-11-25 10:03:37.432406513 +0000 UTC m=+0.083634746 container attach 1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ride, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:03:37 compute-0 bold_ride[281327]: 167 167
Nov 25 10:03:37 compute-0 systemd[1]: libpod-1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b.scope: Deactivated successfully.
Nov 25 10:03:37 compute-0 conmon[281327]: conmon 1b7b9a2fa8225a3d266e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b.scope/container/memory.events
Nov 25 10:03:37 compute-0 podman[281314]: 2025-11-25 10:03:37.434451066 +0000 UTC m=+0.085679280 container died 1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:03:37 compute-0 ceph-mon[74207]: pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 889 B/s rd, 0 op/s
Nov 25 10:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2964b046eb8c3bbbc9dd0eee398ae5fb467248ccfb4b1e830903fc3c599af89-merged.mount: Deactivated successfully.
Nov 25 10:03:37 compute-0 podman[281314]: 2025-11-25 10:03:37.455601033 +0000 UTC m=+0.106829245 container remove 1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 10:03:37 compute-0 podman[281314]: 2025-11-25 10:03:37.365783153 +0000 UTC m=+0.017011367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:37 compute-0 systemd[1]: libpod-conmon-1b7b9a2fa8225a3d266e188e14775091efa586cd31615c50a236a73763d03e5b.scope: Deactivated successfully.
Nov 25 10:03:37 compute-0 podman[281350]: 2025-11-25 10:03:37.583607473 +0000 UTC m=+0.028224631 container create 685c68956add0d5dd1a19d8084249bec4ca2797bd3784693ee45413b3b1640e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:37 compute-0 systemd[1]: Started libpod-conmon-685c68956add0d5dd1a19d8084249bec4ca2797bd3784693ee45413b3b1640e3.scope.
Nov 25 10:03:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3aaef6e0794d8ab08280afe5fe45ca8a9a1ec1e9a64957b45a188282cd087d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3aaef6e0794d8ab08280afe5fe45ca8a9a1ec1e9a64957b45a188282cd087d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3aaef6e0794d8ab08280afe5fe45ca8a9a1ec1e9a64957b45a188282cd087d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3aaef6e0794d8ab08280afe5fe45ca8a9a1ec1e9a64957b45a188282cd087d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:03:37 compute-0 podman[281350]: 2025-11-25 10:03:37.651159382 +0000 UTC m=+0.095776551 container init 685c68956add0d5dd1a19d8084249bec4ca2797bd3784693ee45413b3b1640e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_dubinsky, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:03:37 compute-0 podman[281350]: 2025-11-25 10:03:37.656884172 +0000 UTC m=+0.101501331 container start 685c68956add0d5dd1a19d8084249bec4ca2797bd3784693ee45413b3b1640e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:03:37 compute-0 podman[281350]: 2025-11-25 10:03:37.665304563 +0000 UTC m=+0.109921712 container attach 685c68956add0d5dd1a19d8084249bec4ca2797bd3784693ee45413b3b1640e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_dubinsky, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:37 compute-0 podman[281350]: 2025-11-25 10:03:37.572299247 +0000 UTC m=+0.016916426 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:03:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.038686) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018038739, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1555, "num_deletes": 250, "total_data_size": 2343467, "memory_usage": 2386016, "flush_reason": "Manual Compaction"}
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018045354, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2264580, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26880, "largest_seqno": 28434, "table_properties": {"data_size": 2256608, "index_size": 4402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 21290, "raw_average_key_size": 21, "raw_value_size": 2238936, "raw_average_value_size": 2270, "num_data_blocks": 190, "num_entries": 986, "num_filter_entries": 986, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064931, "oldest_key_time": 1764064931, "file_creation_time": 1764065018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 6694 microseconds, and 4160 cpu microseconds.
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.045384) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2264580 bytes OK
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.045397) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.046030) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.046043) EVENT_LOG_v1 {"time_micros": 1764065018046039, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.046055) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2335596, prev total WAL file size 2335596, number of live WAL files 2.
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.046740) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2211KB)], [59(13MB)]
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018046766, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 16170440, "oldest_snapshot_seqno": -1}
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6319 keys, 14907789 bytes, temperature: kUnknown
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018076703, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 14907789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14864607, "index_size": 26304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 162148, "raw_average_key_size": 25, "raw_value_size": 14749899, "raw_average_value_size": 2334, "num_data_blocks": 1060, "num_entries": 6319, "num_filter_entries": 6319, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764065018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.076996) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 14907789 bytes
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.081876) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 537.0 rd, 495.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.3 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(13.7) write-amplify(6.6) OK, records in: 6837, records dropped: 518 output_compression: NoCompression
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.081907) EVENT_LOG_v1 {"time_micros": 1764065018081886, "job": 32, "event": "compaction_finished", "compaction_time_micros": 30112, "compaction_time_cpu_micros": 21775, "output_level": 6, "num_output_files": 1, "total_output_size": 14907789, "num_input_records": 6837, "num_output_records": 6319, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018082335, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018084073, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.046697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.084098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.084101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.084103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.084104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:03:38.084105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:03:38 compute-0 clever_dubinsky[281364]: {}
Nov 25 10:03:38 compute-0 lvm[281443]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:03:38 compute-0 lvm[281443]: VG ceph_vg0 finished
Nov 25 10:03:38 compute-0 podman[281350]: 2025-11-25 10:03:38.214541293 +0000 UTC m=+0.659158452 container died 685c68956add0d5dd1a19d8084249bec4ca2797bd3784693ee45413b3b1640e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_dubinsky, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:03:38 compute-0 systemd[1]: libpod-685c68956add0d5dd1a19d8084249bec4ca2797bd3784693ee45413b3b1640e3.scope: Deactivated successfully.
Nov 25 10:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f3aaef6e0794d8ab08280afe5fe45ca8a9a1ec1e9a64957b45a188282cd087d-merged.mount: Deactivated successfully.
Nov 25 10:03:38 compute-0 podman[281350]: 2025-11-25 10:03:38.242805939 +0000 UTC m=+0.687423098 container remove 685c68956add0d5dd1a19d8084249bec4ca2797bd3784693ee45413b3b1640e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:03:38 compute-0 systemd[1]: libpod-conmon-685c68956add0d5dd1a19d8084249bec4ca2797bd3784693ee45413b3b1640e3.scope: Deactivated successfully.
Nov 25 10:03:38 compute-0 sudo[281257]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:03:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:03:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:38 compute-0 sudo[281454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:03:38 compute-0 sudo[281454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:38 compute-0 sudo[281454]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:38 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Nov 25 10:03:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:38.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:38.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:38.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:38.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:38.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:38.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:03:39 compute-0 ceph-mon[74207]: pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Nov 25 10:03:39 compute-0 nova_compute[253512]: 2025-11-25 10:03:39.835 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:40] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:03:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:40] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:03:40 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Nov 25 10:03:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:40.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:40.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:41 compute-0 podman[281481]: 2025-11-25 10:03:41.006750875 +0000 UTC m=+0.065312500 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 10:03:41 compute-0 nova_compute[253512]: 2025-11-25 10:03:41.208 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:41 compute-0 ceph-mon[74207]: pgmap v975: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Nov 25 10:03:42 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 0 B/s wr, 207 op/s
Nov 25 10:03:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:03:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:42.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:03:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:42.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:43 compute-0 ceph-mon[74207]: pgmap v976: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 0 B/s wr, 207 op/s
Nov 25 10:03:43 compute-0 sudo[281507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:03:43 compute-0 sudo[281507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:03:43 compute-0 sudo[281507]: pam_unix(sudo:session): session closed for user root
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 0 B/s wr, 207 op/s
Nov 25 10:03:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:44.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:44.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:44 compute-0 nova_compute[253512]: 2025-11-25 10:03:44.836 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:03:44
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 10:03:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:03:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:03:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:03:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:03:45 compute-0 ceph-mon[74207]: pgmap v977: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 0 B/s wr, 207 op/s
Nov 25 10:03:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:03:46 compute-0 nova_compute[253512]: 2025-11-25 10:03:46.209 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:46 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:03:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:46.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:03:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:46.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:46 compute-0 podman[281536]: 2025-11-25 10:03:46.979218725 +0000 UTC m=+0.038765668 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:03:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:47.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:47.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:47.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:47.084Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:47 compute-0 ceph-mon[74207]: pgmap v978: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:48 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:03:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:48.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:03:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:48.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:48.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:48.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:48.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:48.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
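
All three ceph-dashboard webhook deliveries fail the same way: the receivers live at np000553469{4,5,6}.shiftstack, and the resolver at 192.168.122.80 returns NXDOMAIN for those names, so Alertmanager exhausts its retry budget (the "retry canceled after N attempts" errors) and immediately starts a fresh round. A minimal reproduction of the failing lookup step, using the stdlib resolver (this mirrors the log only on a host that, like compute-0, resolves through 192.168.122.80):

import socket

for host in ("np0005534694.shiftstack",
             "np0005534695.shiftstack",
             "np0005534696.shiftstack"):
    try:
        socket.getaddrinfo(host, 8443)   # the port the webhook POSTs to
        print(host, "resolves")
    except socket.gaierror as exc:
        print(host, "-> no such host:", exc)

Until those names resolve, or the webhook URLs are repointed at reachable addresses, this warn/error cycle repeats roughly every ten seconds, as it does throughout the rest of this log.
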
Nov 25 10:03:49 compute-0 ceph-mon[74207]: pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:49 compute-0 nova_compute[253512]: 2025-11-25 10:03:49.839 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:50] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:03:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:03:50] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
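
The paired lines above are one event logged twice (once by the mgr container, once by the mgr's cherrypy access log): Prometheus scraping the mgr's prometheus module. Fetching the endpoint by hand for comparison; the port is an assumption (9283 is the module's default, and the actual bind address is not shown in this log):

from urllib.request import urlopen

with urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
    body = resp.read().decode()
print(len(body), "bytes;", body.splitlines()[0])
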
Nov 25 10:03:50 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:50.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:50.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:51 compute-0 nova_compute[253512]: 2025-11-25 10:03:51.210 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:51 compute-0 ceph-mon[74207]: pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
Nov 25 10:03:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:52.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:52.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:53 compute-0 ceph-mon[74207]: pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
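
pgmap lines recur every ~2 s, once from the mgr (which generates them) and once from the mon (which records the cluster log message). A sketch of a parser for trending them, with the layout taken from the samples in this log:

import re

PGMAP = re.compile(
    r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); '
    r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
    r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail'
)

line = ("pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, "
        "289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s")
m = PGMAP.search(line)
print(m.group("ver"), m.group("pgs"), m.group("states"),
      m.group("used"), "used of", m.group("total"))
# 981 337 337 active+clean 289 MiB used of 60 GiB
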
Nov 25 10:03:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:03:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:54.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:03:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1838669055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:03:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1838669055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
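
These two audited mon commands are a capacity probe from the OpenStack side (client.openstack at 192.168.122.10), consistent with Cinder polling pool usage and quota. The same queries can be issued through the python-rados binding; the conffile path and the availability of the client.openstack keyring are assumptions here:

import json
import rados  # python3-rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
for cmd in ({"prefix": "df", "format": "json"},
            {"prefix": "osd pool get-quota", "pool": "volumes",
             "format": "json"}):
    ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
    print(cmd["prefix"], "->", ret, out[:60])
cluster.shutdown()
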
Nov 25 10:03:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:54.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:54 compute-0 nova_compute[253512]: 2025-11-25 10:03:54.839 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:03:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
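
Every pg_autoscaler line above fits one relation: raw pg target = capacity_ratio x bias x 300. The factor 300 is consistent with a per-OSD PG target of 100 (the mon_target_pg_per_osd default) times the three OSDs behind this 60 GiB cluster; that reading is inferred from the figures, not quoted from the module source. All raw targets sit far below the current pg_num values, and the autoscaler only acts on large deviations, so every pool stays "quantized to" its current size:

def pg_target(capacity_ratio, bias, pg_per_osd=100, num_osds=3):
    # inferred relation; reproduces the raw targets logged above
    return capacity_ratio * bias * pg_per_osd * num_osds

print(pg_target(0.000665858301588852, 1.0))   # images: logged 0.19975749...
print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta: 0.00061047...
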
Nov 25 10:03:55 compute-0 ceph-mon[74207]: pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:03:56 compute-0 nova_compute[253512]: 2025-11-25 10:03:56.212 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:03:56 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 10:03:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:03:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:56.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:03:56 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 10:03:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:56.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:57.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:57.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:57.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:57.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:57 compute-0 ceph-mon[74207]: pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:03:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:03:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:58.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:03:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:03:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:58.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:03:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:58.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:58.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:58.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:03:58.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:03:59 compute-0 ceph-mon[74207]: pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:03:59 compute-0 nova_compute[253512]: 2025-11-25 10:03:59.842 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:03:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:03:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:00] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:04:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:00] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:04:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:00.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:00.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:01 compute-0 nova_compute[253512]: 2025-11-25 10:04:01.212 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:01 compute-0 ceph-mon[74207]: pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:04:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:02.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:02.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:02 compute-0 podman[281573]: 2025-11-25 10:04:02.981368717 +0000 UTC m=+0.036200016 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
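
The podman line above is a periodic healthcheck event for ovn_metadata_agent (health_status=healthy, failing streak 0), with the whole container config echoed into the event labels. The same state can be read back on demand; the container name comes from the log and the Go-template path is standard podman inspect output:

import json
import subprocess

out = subprocess.run(
    ["podman", "inspect", "ovn_metadata_agent",
     "--format", "{{json .State.Health}}"],
    capture_output=True, text=True, check=True)
health = json.loads(out.stdout)
print(health["Status"], health["FailingStreak"])
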
Nov 25 10:04:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:03 compute-0 sudo[281589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:04:03 compute-0 sudo[281589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:03 compute-0 sudo[281589]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:03 compute-0 ceph-mon[74207]: pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:04:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:04.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:04.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:04 compute-0 nova_compute[253512]: 2025-11-25 10:04:04.844 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:04:05.392 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:04:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:04:05.392 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:04:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:04:05.392 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
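
The acquired/held/released trio above is the standard oslo.concurrency pattern: the process monitor serializes _check_child_processes behind a named lock, and lockutils logs all three transitions at DEBUG (the inner wrapper at lockutils.py:404/409/423). A minimal sketch of the same pattern; the decorated body is a placeholder, not neutron's actual monitor:

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    pass  # placeholder; the real monitor respawns dead child processes

check_child_processes()  # emits the acquire/release DEBUG lines when enabled
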
Nov 25 10:04:05 compute-0 ceph-mon[74207]: pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:06 compute-0 nova_compute[253512]: 2025-11-25 10:04:06.214 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:04:06 compute-0 sudo[271982]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:06 compute-0 sshd-session[271981]: Received disconnect from 192.168.122.10 port 35658:11: disconnected by user
Nov 25 10:04:06 compute-0 sshd-session[271981]: Disconnected from user zuul 192.168.122.10 port 35658
Nov 25 10:04:06 compute-0 sshd-session[271978]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:04:06 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Nov 25 10:04:06 compute-0 systemd[1]: session-56.scope: Consumed 2min 11.926s CPU time, 895.1M memory peak, read 384.3M from disk, written 79.0M to disk.
Nov 25 10:04:06 compute-0 systemd-logind[744]: Session 56 logged out. Waiting for processes to exit.
Nov 25 10:04:06 compute-0 systemd-logind[744]: Removed session 56.
Nov 25 10:04:06 compute-0 sshd-session[281618]: Accepted publickey for zuul from 192.168.122.10 port 47692 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 10:04:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:06.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:06 compute-0 systemd-logind[744]: New session 57 of user zuul.
Nov 25 10:04:06 compute-0 systemd[1]: Started Session 57 of User zuul.
Nov 25 10:04:06 compute-0 sshd-session[281618]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:04:06 compute-0 sudo[281622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-11-25-ugbvhyu.tar.xz
Nov 25 10:04:06 compute-0 sudo[281622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:06 compute-0 sudo[281622]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:06 compute-0 sshd-session[281621]: Received disconnect from 192.168.122.10 port 47692:11: disconnected by user
Nov 25 10:04:06 compute-0 sshd-session[281621]: Disconnected from user zuul 192.168.122.10 port 47692
Nov 25 10:04:06 compute-0 sshd-session[281618]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:04:06 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Nov 25 10:04:06 compute-0 systemd-logind[744]: Session 57 logged out. Waiting for processes to exit.
Nov 25 10:04:06 compute-0 systemd-logind[744]: Removed session 57.
Nov 25 10:04:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:06.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:06 compute-0 sshd-session[281647]: Accepted publickey for zuul from 192.168.122.10 port 47694 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 10:04:06 compute-0 systemd-logind[744]: New session 58 of user zuul.
Nov 25 10:04:06 compute-0 systemd[1]: Started Session 58 of User zuul.
Nov 25 10:04:06 compute-0 sshd-session[281647]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:04:06 compute-0 sudo[281651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 25 10:04:06 compute-0 sudo[281651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:06 compute-0 sudo[281651]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:06 compute-0 sshd-session[281650]: Received disconnect from 192.168.122.10 port 47694:11: disconnected by user
Nov 25 10:04:06 compute-0 sshd-session[281650]: Disconnected from user zuul 192.168.122.10 port 47694
Nov 25 10:04:06 compute-0 sshd-session[281647]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:04:06 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Nov 25 10:04:06 compute-0 systemd-logind[744]: Session 58 logged out. Waiting for processes to exit.
Nov 25 10:04:06 compute-0 systemd-logind[744]: Removed session 58.
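
Sessions 57 and 58 are a pair of short-lived zuul logins that fetch the sosreport tarball and then delete the staging directory; the audited sudo COMMAND fields tell the whole story. Pulling them out mechanically (input lines copied from this log):

import re

SUDO = re.compile(r"sudo\[\d+\]:\s+(?P<user>\S+) : .*COMMAND=(?P<cmd>.+)$")
for line in (
    "sudo[281622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat "
    "/var/tmp/sos-osp/sosreport-compute-0-2025-11-25-ugbvhyu.tar.xz",
    "sudo[281651]:     zuul : PWD=/home/zuul ; USER=root ; "
    "COMMAND=/bin/rm -rf /var/tmp/sos-osp",
):
    m = SUDO.search(line)
    print(m.group("user"), "->", m.group("cmd"))
# zuul -> /bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-11-25-ugbvhyu.tar.xz
# zuul -> /bin/rm -rf /var/tmp/sos-osp
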
Nov 25 10:04:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:07.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:07.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:07.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:07.084Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:07 compute-0 ceph-mon[74207]: pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:04:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:08.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:08.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:08.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:08.928Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:08.928Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:08.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:09 compute-0 ceph-mon[74207]: pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:09 compute-0 nova_compute[253512]: 2025-11-25 10:04:09.848 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:10] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:04:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:10] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:04:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:10.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:10.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:10 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:04:11 compute-0 nova_compute[253512]: 2025-11-25 10:04:11.214 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:11 compute-0 ceph-mon[74207]: pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:11 compute-0 podman[281682]: 2025-11-25 10:04:11.991014306 +0000 UTC m=+0.054182167 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:04:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:12.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:12.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:13 compute-0 ceph-mon[74207]: pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:14.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:14.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:14 compute-0 nova_compute[253512]: 2025-11-25 10:04:14.849 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:04:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:04:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:04:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:04:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:04:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:04:14 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:04:15 compute-0 ceph-mon[74207]: pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:16 compute-0 nova_compute[253512]: 2025-11-25 10:04:16.215 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:16 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:16.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:16.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:17.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:17.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:17.102Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:17.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:17 compute-0 ceph-mon[74207]: pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:17 compute-0 podman[281712]: 2025-11-25 10:04:17.979562847 +0000 UTC m=+0.043512870 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 10:04:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:18 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:18.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:18.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:18.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:18.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:18.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:18.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:19 compute-0 ceph-mon[74207]: pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:19 compute-0 nova_compute[253512]: 2025-11-25 10:04:19.850 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:20] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:04:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:20] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:04:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:20.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:20.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:21 compute-0 nova_compute[253512]: 2025-11-25 10:04:21.216 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:21 compute-0 ceph-mon[74207]: pgmap v995: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:22.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:22.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:23 compute-0 sudo[281734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:04:23 compute-0 sudo[281734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:23 compute-0 sudo[281734]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:23 compute-0 ceph-mon[74207]: pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:24.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:24.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:24 compute-0 nova_compute[253512]: 2025-11-25 10:04:24.853 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:25 compute-0 ceph-mon[74207]: pgmap v997: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:26 compute-0 nova_compute[253512]: 2025-11-25 10:04:26.218 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:26 compute-0 nova_compute[253512]: 2025-11-25 10:04:26.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:26.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:27.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:27.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:27.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:27.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
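
Every alertmanager dispatch cycle above fails the same way: the three ceph-dashboard webhook receivers point at np000553469{4,5,6}.shiftstack, which the configured resolver (192.168.122.80:53) cannot resolve, so the POST to :8443/api/prometheus_receiver never leaves DNS. The failure is reproducible outside alertmanager; a sketch of the lookup it attempts:

    # Reproducing the "no such host" errors above: the receiver hostnames
    # simply do not resolve, so the webhook POST can never be attempted.
    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            socket.getaddrinfo(host, 8443)
        except socket.gaierror as exc:
            print(f"lookup {host}: {exc}")   # mirrors the dispatcher errors
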
Nov 25 10:04:27 compute-0 nova_compute[253512]: 2025-11-25 10:04:27.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:27 compute-0 nova_compute[253512]: 2025-11-25 10:04:27.492 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:04:27 compute-0 nova_compute[253512]: 2025-11-25 10:04:27.492 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:04:27 compute-0 nova_compute[253512]: 2025-11-25 10:04:27.492 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:04:27 compute-0 nova_compute[253512]: 2025-11-25 10:04:27.493 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:04:27 compute-0 nova_compute[253512]: 2025-11-25 10:04:27.493 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:04:27 compute-0 ceph-mon[74207]: pgmap v998: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2853664828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1505420710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:04:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681770167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:27 compute-0 nova_compute[253512]: 2025-11-25 10:04:27.836 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
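
The resource audit shells out to exactly the command in the log to size the RBD-backed disk pool, and the mon audit lines show the resulting client.openstack "df" dispatches from all three computes. A sketch running the same command and reading the cluster totals from its JSON, assuming the client.openstack keyring referenced by that conf is readable:

    # Sketch of nova's capacity probe above:
    # "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf".
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.0f} GiB free of "
          f"{stats['total_bytes'] / gib:.0f} GiB")   # 60 GiB / 60 GiB per the pgmap
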
Nov 25 10:04:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.050255) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068050309, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 656, "num_deletes": 256, "total_data_size": 958373, "memory_usage": 972168, "flush_reason": "Manual Compaction"}
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068053372, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 946095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28435, "largest_seqno": 29090, "table_properties": {"data_size": 942585, "index_size": 1354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7391, "raw_average_key_size": 18, "raw_value_size": 935661, "raw_average_value_size": 2282, "num_data_blocks": 60, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764065018, "oldest_key_time": 1764065018, "file_creation_time": 1764065068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 3128 microseconds, and 2279 cpu microseconds.
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.053395) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 946095 bytes OK
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.053407) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.053977) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.053989) EVENT_LOG_v1 {"time_micros": 1764065068053985, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.054000) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 954965, prev total WAL file size 954965, number of live WAL files 2.
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.054356) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(923KB)], [62(14MB)]
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068054381, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 15853884, "oldest_snapshot_seqno": -1}
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.078 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.079 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4597MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.079 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.080 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6206 keys, 15728085 bytes, temperature: kUnknown
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068083115, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 15728085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15684295, "index_size": 27168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 160947, "raw_average_key_size": 25, "raw_value_size": 15570191, "raw_average_value_size": 2508, "num_data_blocks": 1094, "num_entries": 6206, "num_filter_entries": 6206, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764065068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.083250) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 15728085 bytes
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.091397) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 551.1 rd, 546.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.2 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(33.4) write-amplify(16.6) OK, records in: 6729, records dropped: 523 output_compression: NoCompression
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.091412) EVENT_LOG_v1 {"time_micros": 1764065068091405, "job": 34, "event": "compaction_finished", "compaction_time_micros": 28770, "compaction_time_cpu_micros": 20860, "output_level": 6, "num_output_files": 1, "total_output_size": 15728085, "num_input_records": 6729, "num_output_records": 6206, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068091601, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068093236, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.054317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.093255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.093257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.093258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.093260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:04:28 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:04:28.093261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
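
The JOB 34 summary a few lines up reports write-amplify(16.6) and read-write-amplify(33.4); both follow from the exact byte counts in the surrounding EVENT_LOG_v1 records, relative to the L0 input. The arithmetic:

    # Reproducing JOB 34's amplification figures from the EVENT_LOG records:
    # L0 input file #64 is 946095 bytes, total compaction input is 15853884
    # bytes (L0 file plus the 14 MiB L6 file), output file #65 is 15728085.
    l0_in = 946_095
    total_in = 15_853_884
    out = 15_728_085

    print(f"write-amplify({out / l0_in:.1f})")                     # 16.6, as logged
    print(f"read-write-amplify({(total_in + out) / l0_in:.1f})")   # 33.4, as logged
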
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.145 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.145 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.163 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:04:28 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:04:28 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066540253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.495 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.499 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.516 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.518 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:04:28 compute-0 nova_compute[253512]: 2025-11-25 10:04:28.518 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
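
The inventory dict reported to placement at 10:04:28.516 fixes what this node can host: per resource class, placement sizes usable capacity as (total - reserved) * allocation_ratio. A sketch of that arithmetic over the logged values (the formula is placement's capacity rule, restated here as an assumption rather than quoted from its code):

    # Schedulable capacity implied by the inventory line above.
    inventory = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g} schedulable")   # VCPU 16, MEMORY_MB 7169, DISK_GB 52.2
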
Nov 25 10:04:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:28.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:28.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/404904334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3681770167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1216490648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2066540253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:04:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:28.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:28.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:29 compute-0 nova_compute[253512]: 2025-11-25 10:04:29.519 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:29 compute-0 nova_compute[253512]: 2025-11-25 10:04:29.520 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:29 compute-0 nova_compute[253512]: 2025-11-25 10:04:29.520 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:29 compute-0 nova_compute[253512]: 2025-11-25 10:04:29.520 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:29 compute-0 ceph-mon[74207]: pgmap v999: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:29 compute-0 nova_compute[253512]: 2025-11-25 10:04:29.855 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:04:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:30] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:04:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:30] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:04:30 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:30.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:30.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
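
The mgr's periodic "osd blocklist ls" seen in the audit lines is an ordinary monitor command; the same dispatch can be issued from any client host with the librados Python binding. A sketch, assuming python3-rados is installed and /etc/ceph/ceph.conf points at usable admin credentials:

    # Sketch of the monitor command dispatched above via librados.
    import json

    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
        print(ret, json.loads(out or b"[]"))   # empty list when nothing is blocklisted
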
Nov 25 10:04:31 compute-0 nova_compute[253512]: 2025-11-25 10:04:31.218 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:31 compute-0 nova_compute[253512]: 2025-11-25 10:04:31.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:31 compute-0 nova_compute[253512]: 2025-11-25 10:04:31.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:04:31 compute-0 ceph-mon[74207]: pgmap v1000: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:32 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:32 compute-0 nova_compute[253512]: 2025-11-25 10:04:32.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:32 compute-0 nova_compute[253512]: 2025-11-25 10:04:32.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:04:32 compute-0 nova_compute[253512]: 2025-11-25 10:04:32.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:04:32 compute-0 nova_compute[253512]: 2025-11-25 10:04:32.489 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:04:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:32.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:32.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:33 compute-0 ceph-mon[74207]: pgmap v1001: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:33 compute-0 podman[281814]: 2025-11-25 10:04:33.978648649 +0000 UTC m=+0.041161882 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
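
The health_status=healthy event above comes from the ovn_metadata_agent container's configured healthcheck (/openstack/healthcheck, mounted from /var/lib/openstack/healthchecks), with a failing streak of 0. The current state can be read back with podman inspect; a sketch, noting that the template field name varies by podman version:

    # Reading back the health state behind the health_status=healthy event.
    # Newer podman exposes {{.State.Health.Status}}; older releases used
    # {{.State.Healthcheck.Status}} instead.
    import subprocess

    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    print(status)   # "healthy" while the failing streak stays at 0
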
Nov 25 10:04:34 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:34.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:34.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:34 compute-0 nova_compute[253512]: 2025-11-25 10:04:34.855 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:35 compute-0 nova_compute[253512]: 2025-11-25 10:04:35.485 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:35 compute-0 ceph-mon[74207]: pgmap v1002: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:36 compute-0 nova_compute[253512]: 2025-11-25 10:04:36.218 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:36 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:36.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:36.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:37.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:37.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:37.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:37.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:37 compute-0 nova_compute[253512]: 2025-11-25 10:04:37.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:04:37 compute-0 ceph-mon[74207]: pgmap v1003: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:38 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:38 compute-0 sudo[281835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:04:38 compute-0 sudo[281835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:38 compute-0 sudo[281835]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:38.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:38 compute-0 sudo[281860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:04:38 compute-0 sudo[281860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:38.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 10:04:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 10:04:38 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:38.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:38.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:38.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:38.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:38 compute-0 sudo[281860]: pam_unix(sudo:session): session closed for user root
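
The sudo records above are the mgr's cephadm module running its checksum-suffixed copy of the cephadm binary with gather-facts, which prints one JSON document of host facts that feed the config-key writes that follow (host.compute-2, host.compute-2.devices.0). A sketch of the same invocation, reusing the binary path verbatim from the sudo record; the printed keys are an assumption about the facts schema:

    # Sketch of the host-facts probe logged above.
    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    facts = json.loads(subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout)

    print(facts.get("hostname"), facts.get("memory_total_kb"))
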
Nov 25 10:04:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:04:39 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:04:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:04:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:04:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:04:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:04:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:04:39 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:04:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:04:39 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:04:39 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:04:39 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:04:39 compute-0 sudo[281915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:04:39 compute-0 sudo[281915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:39 compute-0 sudo[281915]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:39 compute-0 sudo[281940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:04:39 compute-0 sudo[281940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
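
This sudo record is cephadm creating an OSD: it pipes credentials over stdin (--config-json -) and runs "ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd" inside the ceph container, with CEPH_VOLUME_OSDSPEC_AFFINITY tagging the OSD back to the "default_drive_group" spec; the podman lines that follow are that container starting. A dry-run sketch of the same batch call, using ceph-volume's --report flag to print the plan instead of executing it (assumes a host with ceph-volume available):

    # Dry-run sketch of the OSD creation logged above; LV path and spec
    # name are taken verbatim from the sudo record.
    import os
    import subprocess

    env = dict(os.environ, CEPH_VOLUME_OSDSPEC_AFFINITY="default_drive_group")
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "--report"],
        env=env, check=True)
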
Nov 25 10:04:39 compute-0 podman[281998]: 2025-11-25 10:04:39.668509942 +0000 UTC m=+0.027043539 container create 138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_albattani, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:04:39 compute-0 systemd[1]: Started libpod-conmon-138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0.scope.
Nov 25 10:04:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:04:39 compute-0 podman[281998]: 2025-11-25 10:04:39.728719271 +0000 UTC m=+0.087252867 container init 138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_albattani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:04:39 compute-0 podman[281998]: 2025-11-25 10:04:39.73339136 +0000 UTC m=+0.091924957 container start 138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:04:39 compute-0 podman[281998]: 2025-11-25 10:04:39.734508345 +0000 UTC m=+0.093041941 container attach 138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_albattani, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:04:39 compute-0 adoring_albattani[282011]: 167 167
Nov 25 10:04:39 compute-0 systemd[1]: libpod-138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0.scope: Deactivated successfully.
Nov 25 10:04:39 compute-0 conmon[282011]: conmon 138ee2da82b3b6a2bfe7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0.scope/container/memory.events
Nov 25 10:04:39 compute-0 podman[281998]: 2025-11-25 10:04:39.737328337 +0000 UTC m=+0.095861933 container died 138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 10:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac4f854ef1553ca9d42d0b9106f737912b0861313339fff58c3e0e0f9a9adaa-merged.mount: Deactivated successfully.
Nov 25 10:04:39 compute-0 podman[281998]: 2025-11-25 10:04:39.657908163 +0000 UTC m=+0.016441779 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:04:39 compute-0 podman[281998]: 2025-11-25 10:04:39.755676797 +0000 UTC m=+0.114210392 container remove 138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_albattani, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 10:04:39 compute-0 systemd[1]: libpod-conmon-138ee2da82b3b6a2bfe7185fa36eeff3c05a6c83f78a727497303bc9161928e0.scope: Deactivated successfully.
Nov 25 10:04:39 compute-0 ceph-mon[74207]: pgmap v1004: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:04:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:04:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:04:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:04:39 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:04:39 compute-0 nova_compute[253512]: 2025-11-25 10:04:39.858 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:39 compute-0 podman[282033]: 2025-11-25 10:04:39.879972345 +0000 UTC m=+0.031002326 container create 947a02234aa68743407eacc8e9ec28f2dc277a45c8674f023bb0c3463159aea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:04:39 compute-0 systemd[1]: Started libpod-conmon-947a02234aa68743407eacc8e9ec28f2dc277a45c8674f023bb0c3463159aea2.scope.
Nov 25 10:04:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe1163f9b615a2a73867c3c390707b9ce62e130c9cc2535add91e6e9fc28f80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe1163f9b615a2a73867c3c390707b9ce62e130c9cc2535add91e6e9fc28f80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe1163f9b615a2a73867c3c390707b9ce62e130c9cc2535add91e6e9fc28f80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe1163f9b615a2a73867c3c390707b9ce62e130c9cc2535add91e6e9fc28f80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe1163f9b615a2a73867c3c390707b9ce62e130c9cc2535add91e6e9fc28f80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:39 compute-0 podman[282033]: 2025-11-25 10:04:39.942010358 +0000 UTC m=+0.093040350 container init 947a02234aa68743407eacc8e9ec28f2dc277a45c8674f023bb0c3463159aea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feynman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 25 10:04:39 compute-0 podman[282033]: 2025-11-25 10:04:39.946765785 +0000 UTC m=+0.097795756 container start 947a02234aa68743407eacc8e9ec28f2dc277a45c8674f023bb0c3463159aea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feynman, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 10:04:39 compute-0 podman[282033]: 2025-11-25 10:04:39.947670259 +0000 UTC m=+0.098700230 container attach 947a02234aa68743407eacc8e9ec28f2dc277a45c8674f023bb0c3463159aea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 10:04:39 compute-0 podman[282033]: 2025-11-25 10:04:39.86778486 +0000 UTC m=+0.018814851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:04:40 compute-0 hungry_feynman[282046]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:04:40 compute-0 hungry_feynman[282046]: --> All data devices are unavailable
Nov 25 10:04:40 compute-0 systemd[1]: libpod-947a02234aa68743407eacc8e9ec28f2dc277a45c8674f023bb0c3463159aea2.scope: Deactivated successfully.
Nov 25 10:04:40 compute-0 podman[282033]: 2025-11-25 10:04:40.208299169 +0000 UTC m=+0.359329150 container died 947a02234aa68743407eacc8e9ec28f2dc277a45c8674f023bb0c3463159aea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:04:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fe1163f9b615a2a73867c3c390707b9ce62e130c9cc2535add91e6e9fc28f80-merged.mount: Deactivated successfully.
Nov 25 10:04:40 compute-0 podman[282033]: 2025-11-25 10:04:40.231406514 +0000 UTC m=+0.382436485 container remove 947a02234aa68743407eacc8e9ec28f2dc277a45c8674f023bb0c3463159aea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_feynman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 10:04:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:40] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:04:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:40] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:04:40 compute-0 systemd[1]: libpod-conmon-947a02234aa68743407eacc8e9ec28f2dc277a45c8674f023bb0c3463159aea2.scope: Deactivated successfully.
Nov 25 10:04:40 compute-0 sudo[281940]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:40 compute-0 sudo[282072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:04:40 compute-0 sudo[282072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:40 compute-0 sudo[282072]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:40 compute-0 sudo[282097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:04:40 compute-0 sudo[282097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:40.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:40 compute-0 podman[282155]: 2025-11-25 10:04:40.638205886 +0000 UTC m=+0.027699355 container create 0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:04:40 compute-0 systemd[1]: Started libpod-conmon-0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad.scope.
Nov 25 10:04:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:04:40 compute-0 podman[282155]: 2025-11-25 10:04:40.693747322 +0000 UTC m=+0.083240790 container init 0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:04:40 compute-0 podman[282155]: 2025-11-25 10:04:40.698271414 +0000 UTC m=+0.087764882 container start 0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:04:40 compute-0 podman[282155]: 2025-11-25 10:04:40.699334866 +0000 UTC m=+0.088828355 container attach 0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 10:04:40 compute-0 cool_curran[282167]: 167 167
Nov 25 10:04:40 compute-0 systemd[1]: libpod-0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad.scope: Deactivated successfully.
Nov 25 10:04:40 compute-0 conmon[282167]: conmon 0192d3903d58d1ff394f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad.scope/container/memory.events
Nov 25 10:04:40 compute-0 podman[282155]: 2025-11-25 10:04:40.702289452 +0000 UTC m=+0.091782970 container died 0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:04:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4885b893a163421b9114f04254f744342876ae8249b44f3ddd3b45529a81d82f-merged.mount: Deactivated successfully.
Nov 25 10:04:40 compute-0 podman[282155]: 2025-11-25 10:04:40.627256433 +0000 UTC m=+0.016749921 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:04:40 compute-0 podman[282155]: 2025-11-25 10:04:40.724308717 +0000 UTC m=+0.113802185 container remove 0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:04:40 compute-0 systemd[1]: libpod-conmon-0192d3903d58d1ff394f1e4207f5fc28395b69066c33878bfa067e3022551dad.scope: Deactivated successfully.
Nov 25 10:04:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:40.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:40 compute-0 ceph-mon[74207]: pgmap v1005: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:40 compute-0 podman[282189]: 2025-11-25 10:04:40.8417474 +0000 UTC m=+0.028284867 container create aeffdad3eb71771cab62b4481d5c791407b1e350e006c6ae0bddd08d2ce403ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:04:40 compute-0 systemd[1]: Started libpod-conmon-aeffdad3eb71771cab62b4481d5c791407b1e350e006c6ae0bddd08d2ce403ca.scope.
Nov 25 10:04:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a902a027dcc6d8e40aa1b7b1c1b1cd46419de3a0cff54fb76eb2e0ee48165c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a902a027dcc6d8e40aa1b7b1c1b1cd46419de3a0cff54fb76eb2e0ee48165c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a902a027dcc6d8e40aa1b7b1c1b1cd46419de3a0cff54fb76eb2e0ee48165c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a902a027dcc6d8e40aa1b7b1c1b1cd46419de3a0cff54fb76eb2e0ee48165c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:40 compute-0 podman[282189]: 2025-11-25 10:04:40.908142219 +0000 UTC m=+0.094679686 container init aeffdad3eb71771cab62b4481d5c791407b1e350e006c6ae0bddd08d2ce403ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_black, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:04:40 compute-0 podman[282189]: 2025-11-25 10:04:40.912842352 +0000 UTC m=+0.099379819 container start aeffdad3eb71771cab62b4481d5c791407b1e350e006c6ae0bddd08d2ce403ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_black, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 25 10:04:40 compute-0 podman[282189]: 2025-11-25 10:04:40.914357145 +0000 UTC m=+0.100894613 container attach aeffdad3eb71771cab62b4481d5c791407b1e350e006c6ae0bddd08d2ce403ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:04:40 compute-0 podman[282189]: 2025-11-25 10:04:40.830936657 +0000 UTC m=+0.017474126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:04:41 compute-0 kind_black[282202]: {
Nov 25 10:04:41 compute-0 kind_black[282202]:     "1": [
Nov 25 10:04:41 compute-0 kind_black[282202]:         {
Nov 25 10:04:41 compute-0 kind_black[282202]:             "devices": [
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "/dev/loop3"
Nov 25 10:04:41 compute-0 kind_black[282202]:             ],
Nov 25 10:04:41 compute-0 kind_black[282202]:             "lv_name": "ceph_lv0",
Nov 25 10:04:41 compute-0 kind_black[282202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:04:41 compute-0 kind_black[282202]:             "lv_size": "21470642176",
Nov 25 10:04:41 compute-0 kind_black[282202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:04:41 compute-0 kind_black[282202]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:04:41 compute-0 kind_black[282202]:             "name": "ceph_lv0",
Nov 25 10:04:41 compute-0 kind_black[282202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:04:41 compute-0 kind_black[282202]:             "tags": {
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.cluster_name": "ceph",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.crush_device_class": "",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.encrypted": "0",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.osd_id": "1",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.type": "block",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.vdo": "0",
Nov 25 10:04:41 compute-0 kind_black[282202]:                 "ceph.with_tpm": "0"
Nov 25 10:04:41 compute-0 kind_black[282202]:             },
Nov 25 10:04:41 compute-0 kind_black[282202]:             "type": "block",
Nov 25 10:04:41 compute-0 kind_black[282202]:             "vg_name": "ceph_vg0"
Nov 25 10:04:41 compute-0 kind_black[282202]:         }
Nov 25 10:04:41 compute-0 kind_black[282202]:     ]
Nov 25 10:04:41 compute-0 kind_black[282202]: }
Nov 25 10:04:41 compute-0 systemd[1]: libpod-aeffdad3eb71771cab62b4481d5c791407b1e350e006c6ae0bddd08d2ce403ca.scope: Deactivated successfully.
Nov 25 10:04:41 compute-0 podman[282211]: 2025-11-25 10:04:41.173195473 +0000 UTC m=+0.016082332 container died aeffdad3eb71771cab62b4481d5c791407b1e350e006c6ae0bddd08d2ce403ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:04:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a902a027dcc6d8e40aa1b7b1c1b1cd46419de3a0cff54fb76eb2e0ee48165c72-merged.mount: Deactivated successfully.
Nov 25 10:04:41 compute-0 podman[282211]: 2025-11-25 10:04:41.191602504 +0000 UTC m=+0.034489353 container remove aeffdad3eb71771cab62b4481d5c791407b1e350e006c6ae0bddd08d2ce403ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_black, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:04:41 compute-0 systemd[1]: libpod-conmon-aeffdad3eb71771cab62b4481d5c791407b1e350e006c6ae0bddd08d2ce403ca.scope: Deactivated successfully.
Nov 25 10:04:41 compute-0 sudo[282097]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:41 compute-0 nova_compute[253512]: 2025-11-25 10:04:41.219 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:41 compute-0 sudo[282224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:04:41 compute-0 sudo[282224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:41 compute-0 sudo[282224]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 845 B/s rd, 0 op/s
Nov 25 10:04:41 compute-0 sudo[282249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:04:41 compute-0 sudo[282249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:41 compute-0 podman[282305]: 2025-11-25 10:04:41.618088166 +0000 UTC m=+0.030327805 container create c07f866ea57607c57558cb122a2a99530e34144e425f6c53285ce11feef57342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 25 10:04:41 compute-0 systemd[1]: Started libpod-conmon-c07f866ea57607c57558cb122a2a99530e34144e425f6c53285ce11feef57342.scope.
Nov 25 10:04:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:04:41 compute-0 podman[282305]: 2025-11-25 10:04:41.678444261 +0000 UTC m=+0.090683890 container init c07f866ea57607c57558cb122a2a99530e34144e425f6c53285ce11feef57342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:04:41 compute-0 podman[282305]: 2025-11-25 10:04:41.682964766 +0000 UTC m=+0.095204384 container start c07f866ea57607c57558cb122a2a99530e34144e425f6c53285ce11feef57342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:04:41 compute-0 podman[282305]: 2025-11-25 10:04:41.684433642 +0000 UTC m=+0.096673271 container attach c07f866ea57607c57558cb122a2a99530e34144e425f6c53285ce11feef57342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 25 10:04:41 compute-0 heuristic_montalcini[282320]: 167 167
Nov 25 10:04:41 compute-0 systemd[1]: libpod-c07f866ea57607c57558cb122a2a99530e34144e425f6c53285ce11feef57342.scope: Deactivated successfully.
Nov 25 10:04:41 compute-0 podman[282305]: 2025-11-25 10:04:41.686846947 +0000 UTC m=+0.099086577 container died c07f866ea57607c57558cb122a2a99530e34144e425f6c53285ce11feef57342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:04:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7402ff0b53e8e994d9627550b6e14f8cafb4acb1f355f0155f0a3ab1f11dc3ac-merged.mount: Deactivated successfully.
Nov 25 10:04:41 compute-0 podman[282305]: 2025-11-25 10:04:41.606251722 +0000 UTC m=+0.018491371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:04:41 compute-0 podman[282305]: 2025-11-25 10:04:41.707669488 +0000 UTC m=+0.119909118 container remove c07f866ea57607c57558cb122a2a99530e34144e425f6c53285ce11feef57342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:04:41 compute-0 systemd[1]: libpod-conmon-c07f866ea57607c57558cb122a2a99530e34144e425f6c53285ce11feef57342.scope: Deactivated successfully.
Nov 25 10:04:41 compute-0 podman[282343]: 2025-11-25 10:04:41.832968727 +0000 UTC m=+0.028759251 container create 5f1c8315c0b042888ea7f8a579c2b7cd8b412fa9f99082124f46d82a9e1ae1a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 10:04:41 compute-0 systemd[1]: Started libpod-conmon-5f1c8315c0b042888ea7f8a579c2b7cd8b412fa9f99082124f46d82a9e1ae1a3.scope.
Nov 25 10:04:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f148195a66c36f56086ce03dfeaa163e722b5fd422fc45499c6281075c6ee6f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f148195a66c36f56086ce03dfeaa163e722b5fd422fc45499c6281075c6ee6f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f148195a66c36f56086ce03dfeaa163e722b5fd422fc45499c6281075c6ee6f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f148195a66c36f56086ce03dfeaa163e722b5fd422fc45499c6281075c6ee6f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:04:41 compute-0 podman[282343]: 2025-11-25 10:04:41.891688991 +0000 UTC m=+0.087479514 container init 5f1c8315c0b042888ea7f8a579c2b7cd8b412fa9f99082124f46d82a9e1ae1a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_saha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Nov 25 10:04:41 compute-0 podman[282343]: 2025-11-25 10:04:41.896632282 +0000 UTC m=+0.092422805 container start 5f1c8315c0b042888ea7f8a579c2b7cd8b412fa9f99082124f46d82a9e1ae1a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:04:41 compute-0 podman[282343]: 2025-11-25 10:04:41.897684915 +0000 UTC m=+0.093475438 container attach 5f1c8315c0b042888ea7f8a579c2b7cd8b412fa9f99082124f46d82a9e1ae1a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_saha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 10:04:41 compute-0 podman[282343]: 2025-11-25 10:04:41.820511473 +0000 UTC m=+0.016302017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:04:42 compute-0 charming_saha[282355]: {}
Nov 25 10:04:42 compute-0 lvm[282440]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:04:42 compute-0 lvm[282440]: VG ceph_vg0 finished
Nov 25 10:04:42 compute-0 systemd[1]: libpod-5f1c8315c0b042888ea7f8a579c2b7cd8b412fa9f99082124f46d82a9e1ae1a3.scope: Deactivated successfully.
Nov 25 10:04:42 compute-0 podman[282343]: 2025-11-25 10:04:42.405875754 +0000 UTC m=+0.601666268 container died 5f1c8315c0b042888ea7f8a579c2b7cd8b412fa9f99082124f46d82a9e1ae1a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:04:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f148195a66c36f56086ce03dfeaa163e722b5fd422fc45499c6281075c6ee6f0-merged.mount: Deactivated successfully.
Nov 25 10:04:42 compute-0 podman[282343]: 2025-11-25 10:04:42.43176587 +0000 UTC m=+0.627556393 container remove 5f1c8315c0b042888ea7f8a579c2b7cd8b412fa9f99082124f46d82a9e1ae1a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_saha, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:04:42 compute-0 systemd[1]: libpod-conmon-5f1c8315c0b042888ea7f8a579c2b7cd8b412fa9f99082124f46d82a9e1ae1a3.scope: Deactivated successfully.
Nov 25 10:04:42 compute-0 sudo[282249]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:04:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:42 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:04:42 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:42 compute-0 podman[282431]: 2025-11-25 10:04:42.486963898 +0000 UTC m=+0.116625101 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:04:42 compute-0 sudo[282469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:04:42 compute-0 sudo[282469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:42 compute-0 sudo[282469]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:42.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:42.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:42 compute-0 ceph-mon[74207]: pgmap v1006: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 845 B/s rd, 0 op/s
Nov 25 10:04:42 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:42 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:04:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:43 compute-0 sudo[282494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:04:43 compute-0 sudo[282494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:04:43 compute-0 sudo[282494]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:44.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:44.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:44 compute-0 ceph-mon[74207]: pgmap v1007: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:44 compute-0 nova_compute[253512]: 2025-11-25 10:04:44.859 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
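
The nova_compute DEBUG lines of the form "[POLLIN] on fd 24 __log_wakeup" are the python-ovs poll loop (used by ovsdbapp for the OVSDB IDL connection) reporting that its socket became readable. A minimal sketch of the same mechanism, assuming the python-ovs package is available; the pipe stands in for the OVSDB socket:

    import os
    import ovs.poller

    r, w = os.pipe()
    os.write(w, b"x")                     # make the read end readable,
                                          # like an incoming OVSDB update
    poller = ovs.poller.Poller()
    poller.fd_wait(r, ovs.poller.POLLIN)  # register interest in readability
    poller.block()                        # returns once the fd is ready;
                                          # ovs logs the wakeup at DEBUG
    print("woke up on fd", r)
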
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:04:44
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.mgr', 'vms', '.nfs', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.meta']
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 10:04:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:04:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
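
The handle_command/dispatch pair above is the mgr polling the mon with a JSON mon_command. The same query can be issued from the rados Python bindings; a sketch, assuming python3-rados is installed and the usual /etc/ceph/ceph.conf plus an admin keyring are present:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # path assumed
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")  # same JSON as the mgr
    print(ret, errs)
    print(json.loads(outbuf) if outbuf else [])
    cluster.shutdown()
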
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:04:44 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:04:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:04:46 compute-0 nova_compute[253512]: 2025-11-25 10:04:46.221 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:46.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:46 compute-0 ceph-mon[74207]: pgmap v1008: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:46 compute-0 ceph-mgr[74476]: [devicehealth INFO root] Check health
Nov 25 10:04:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:47.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:47.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:47.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:47.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
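
All of the alertmanager errors in this section share one root cause: the ceph-dashboard webhook targets np0005534694-np0005534696.shiftstack do not resolve on the DNS server at 192.168.122.80, so every notification retry dies with "no such host". A quick way to reproduce the lookups (this uses the host's configured resolver, which may differ from the 192.168.122.80 server the container queries):

    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as exc:
            print(host, "-> lookup failed:", exc)  # the "no such host" case
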
Nov 25 10:04:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:48.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:04:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:48.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:04:48 compute-0 ceph-mon[74207]: pgmap v1009: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:48.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:48.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:48.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:48.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:49 compute-0 podman[282525]: 2025-11-25 10:04:49.00641625 +0000 UTC m=+0.066918776 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 25 10:04:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:49 compute-0 nova_compute[253512]: 2025-11-25 10:04:49.860 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:50] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:04:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:04:50] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:04:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:50.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:50.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:50 compute-0 ceph-mon[74207]: pgmap v1010: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Nov 25 10:04:51 compute-0 nova_compute[253512]: 2025-11-25 10:04:51.222 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:52.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:52.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:52 compute-0 ceph-mon[74207]: pgmap v1011: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:54.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:54.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:54 compute-0 nova_compute[253512]: 2025-11-25 10:04:54.861 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:54 compute-0 ceph-mon[74207]: pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/822873780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:04:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/822873780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:04:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
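
Every pg_autoscaler line above fits one formula: pg target = (pool's share of raw space) x bias x PG budget, with the result then quantized to a power of two subject to pool minimums (the quantization step is not modeled below). The budget that reproduces all the numbers here is 300, which would match 3 OSDs at the default mon_target_pg_per_osd of 100; that breakdown is an inference from this log, not something it states:

    # Inferred: 3 OSDs x mon_target_pg_per_osd(100); only the product 300
    # is actually verifiable from the log lines above.
    PG_BUDGET = 3 * 100

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * PG_BUDGET

    # Reproduces the 'cephfs.cephfs.meta' line (bias 4.0):
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635
    # Reproduces the '.mgr' line (bias 1.0):
    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337
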
Nov 25 10:04:56 compute-0 nova_compute[253512]: 2025-11-25 10:04:56.223 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:56.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:04:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:56.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:04:56 compute-0 ceph-mon[74207]: pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:57.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:57.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:57.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:57.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:04:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:58.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:04:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:04:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:58.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:04:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:58.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:58.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:58.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:04:58.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:04:58 compute-0 ceph-mon[74207]: pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:04:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:04:59 compute-0 nova_compute[253512]: 2025-11-25 10:04:59.863 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:04:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:04:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:00] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:05:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:00] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:05:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:00.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:05:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:00.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:05:00 compute-0 ceph-mon[74207]: pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:01 compute-0 nova_compute[253512]: 2025-11-25 10:05:01.224 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:02.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:02.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:02 compute-0 ceph-mon[74207]: pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:03 compute-0 sudo[282557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:05:03 compute-0 sudo[282557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:03 compute-0 sudo[282557]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:04.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:04.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:04 compute-0 nova_compute[253512]: 2025-11-25 10:05:04.865 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:04 compute-0 ceph-mon[74207]: pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:04 compute-0 podman[282584]: 2025-11-25 10:05:04.992222068 +0000 UTC m=+0.049677672 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 25 10:05:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:05:05.392 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:05:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:05:05.393 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:05:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:05:05.393 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
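
The three ovn_metadata_agent lines above are one pass of the standard oslo.concurrency lock lifecycle: "Acquiring" -> "acquired ... waited 0.001s" -> "released ... held 0.000s", taken around the periodic check of the haproxy child processes the agent spawns. A minimal sketch of the same pattern, assuming oslo.concurrency is installed and DEBUG logging is configured:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # the real ProcessMonitor inspects its spawned haproxy children here
        pass

    check_child_processes()  # logs the same Acquiring/acquired/released trio
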
Nov 25 10:05:06 compute-0 nova_compute[253512]: 2025-11-25 10:05:06.225 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:05:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:06.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:05:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:06.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:06 compute-0 ceph-mon[74207]: pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:07.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:07.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:07.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:07.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:05:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:08.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:05:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:08.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:08.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:08.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:08.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:08.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:08 compute-0 ceph-mon[74207]: pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:09 compute-0 nova_compute[253512]: 2025-11-25 10:05:09.867 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:10] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:05:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:10] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:05:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:05:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:10.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:05:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:10.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:10 compute-0 ceph-mon[74207]: pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:11 compute-0 nova_compute[253512]: 2025-11-25 10:05:11.228 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:05:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:12.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:05:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:12.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:12 compute-0 ceph-mon[74207]: pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:12 compute-0 podman[282608]: 2025-11-25 10:05:12.994833413 +0000 UTC m=+0.059240925 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 10:05:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:14.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:14.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:14 compute-0 nova_compute[253512]: 2025-11-25 10:05:14.869 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:14 compute-0 ceph-mon[74207]: pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:05:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:05:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:05:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:05:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:05:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:05:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:05:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:16 compute-0 nova_compute[253512]: 2025-11-25 10:05:16.229 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:16.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:16.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:16 compute-0 ceph-mon[74207]: pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:17.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:17.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:17.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:17.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:18.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:18.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:18.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:18.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:18.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:18.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:18 compute-0 ceph-mon[74207]: pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:19 compute-0 nova_compute[253512]: 2025-11-25 10:05:19.873 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:19 compute-0 ceph-mon[74207]: pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:19 compute-0 podman[282639]: 2025-11-25 10:05:19.985705442 +0000 UTC m=+0.048905138 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:05:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:20] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:05:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:20] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:05:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:20.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:20.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:21 compute-0 nova_compute[253512]: 2025-11-25 10:05:21.231 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:22 compute-0 ceph-mon[74207]: pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:22.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:22.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:23 compute-0 sudo[282660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:05:23 compute-0 sudo[282660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:23 compute-0 sudo[282660]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:24 compute-0 ceph-mon[74207]: pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.373668) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124373695, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 762, "num_deletes": 251, "total_data_size": 1176224, "memory_usage": 1206416, "flush_reason": "Manual Compaction"}
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124377067, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1160685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29091, "largest_seqno": 29852, "table_properties": {"data_size": 1156760, "index_size": 1705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8970, "raw_average_key_size": 19, "raw_value_size": 1148812, "raw_average_value_size": 2524, "num_data_blocks": 74, "num_entries": 455, "num_filter_entries": 455, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764065068, "oldest_key_time": 1764065068, "file_creation_time": 1764065124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 3422 microseconds, and 2531 cpu microseconds.
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377091) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1160685 bytes OK
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377101) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377623) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377633) EVENT_LOG_v1 {"time_micros": 1764065124377630, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377642) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1172427, prev total WAL file size 1172427, number of live WAL files 2.
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377956) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1133KB)], [65(14MB)]
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124377973, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16888770, "oldest_snapshot_seqno": -1}
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6143 keys, 14784412 bytes, temperature: kUnknown
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124411609, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14784412, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14741850, "index_size": 26132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 160297, "raw_average_key_size": 26, "raw_value_size": 14629513, "raw_average_value_size": 2381, "num_data_blocks": 1047, "num_entries": 6143, "num_filter_entries": 6143, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764065124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.411742) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14784412 bytes
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.413691) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 501.6 rd, 439.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 15.0 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(27.3) write-amplify(12.7) OK, records in: 6661, records dropped: 518 output_compression: NoCompression
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.413704) EVENT_LOG_v1 {"time_micros": 1764065124413698, "job": 36, "event": "compaction_finished", "compaction_time_micros": 33670, "compaction_time_cpu_micros": 21339, "output_level": 6, "num_output_files": 1, "total_output_size": 14784412, "num_input_records": 6661, "num_output_records": 6143, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124413939, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124416007, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:05:24.416135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:05:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:24.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:24.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:24 compute-0 nova_compute[253512]: 2025-11-25 10:05:24.874 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=cleanup t=2025-11-25T10:05:25.181348212Z level=info msg="Completed cleanup jobs" duration=2.404569ms
Nov 25 10:05:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=plugins.update.checker t=2025-11-25T10:05:25.270933583Z level=info msg="Update check succeeded" duration=31.924302ms
Nov 25 10:05:25 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=grafana.update.checker t=2025-11-25T10:05:25.274262031Z level=info msg="Update check succeeded" duration=46.867568ms
Nov 25 10:05:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:26 compute-0 nova_compute[253512]: 2025-11-25 10:05:26.233 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:26 compute-0 ceph-mon[74207]: pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:26.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:26.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:27.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:27.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:27.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:27.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2026839709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:27 compute-0 nova_compute[253512]: 2025-11-25 10:05:27.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:27 compute-0 nova_compute[253512]: 2025-11-25 10:05:27.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:27 compute-0 nova_compute[253512]: 2025-11-25 10:05:27.486 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:05:27 compute-0 nova_compute[253512]: 2025-11-25 10:05:27.486 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:05:27 compute-0 nova_compute[253512]: 2025-11-25 10:05:27.487 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:05:27 compute-0 nova_compute[253512]: 2025-11-25 10:05:27.487 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:05:27 compute-0 nova_compute[253512]: 2025-11-25 10:05:27.487 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:05:27 compute-0 nova_compute[253512]: 2025-11-25 10:05:27.830 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.021 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.022 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4589MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.022 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.022 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:05:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.095 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.095 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.112 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:05:28 compute-0 ceph-mon[74207]: pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/251651596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3162503918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1737718634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2471864516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:05:28 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879975490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.460 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.464 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.480 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.481 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:05:28 compute-0 nova_compute[253512]: 2025-11-25 10:05:28.482 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:05:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:28.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:28.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:28.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:28.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:28.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/879975490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:05:29 compute-0 nova_compute[253512]: 2025-11-25 10:05:29.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:29 compute-0 nova_compute[253512]: 2025-11-25 10:05:29.482 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:29 compute-0 nova_compute[253512]: 2025-11-25 10:05:29.875 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:05:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:30] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:05:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:30] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:05:30 compute-0 ceph-mon[74207]: pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:30 compute-0 nova_compute[253512]: 2025-11-25 10:05:30.467 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:30.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:30.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:31 compute-0 nova_compute[253512]: 2025-11-25 10:05:31.234 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:31 compute-0 nova_compute[253512]: 2025-11-25 10:05:31.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:32 compute-0 ceph-mon[74207]: pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:32 compute-0 nova_compute[253512]: 2025-11-25 10:05:32.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:32 compute-0 nova_compute[253512]: 2025-11-25 10:05:32.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:05:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:32.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:32.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:34 compute-0 ceph-mon[74207]: pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:34 compute-0 nova_compute[253512]: 2025-11-25 10:05:34.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:34 compute-0 nova_compute[253512]: 2025-11-25 10:05:34.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:05:34 compute-0 nova_compute[253512]: 2025-11-25 10:05:34.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:05:34 compute-0 nova_compute[253512]: 2025-11-25 10:05:34.484 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:05:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:34.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:34 compute-0 nova_compute[253512]: 2025-11-25 10:05:34.877 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:35 compute-0 podman[282741]: 2025-11-25 10:05:35.972388108 +0000 UTC m=+0.034682618 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:05:36 compute-0 nova_compute[253512]: 2025-11-25 10:05:36.237 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:36 compute-0 ceph-mon[74207]: pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:36.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:36.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:37.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:37.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:37.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:37.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:37 compute-0 nova_compute[253512]: 2025-11-25 10:05:37.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:05:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:38 compute-0 ceph-mon[74207]: pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:38.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:05:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:38.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:05:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:38.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:38.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:38.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:38.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:39 compute-0 nova_compute[253512]: 2025-11-25 10:05:39.880 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:40] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:05:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:40] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:05:40 compute-0 ceph-mon[74207]: pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:40.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:40.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:41 compute-0 nova_compute[253512]: 2025-11-25 10:05:41.239 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:42 compute-0 ceph-mon[74207]: pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:05:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:42.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:05:42 compute-0 sudo[282765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:05:42 compute-0 sudo[282765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:42 compute-0 sudo[282765]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:42 compute-0 sudo[282790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:05:42 compute-0 sudo[282790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:42.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:43 compute-0 sudo[282790]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:43 compute-0 sudo[282845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:05:43 compute-0 sudo[282845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:43 compute-0 sudo[282845]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:43 compute-0 sudo[282876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 25 10:05:43 compute-0 sudo[282876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:43 compute-0 podman[282869]: 2025-11-25 10:05:43.327481682 +0000 UTC m=+0.064399371 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:43 compute-0 sudo[282876]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:05:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:05:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:05:43 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:05:43 compute-0 sudo[282933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:05:43 compute-0 sudo[282933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:43 compute-0 sudo[282933]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:43 compute-0 sudo[282958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:05:43 compute-0 sudo[282958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:43 compute-0 sudo[282995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:05:43 compute-0 sudo[282995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:43 compute-0 sudo[282995]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:43 compute-0 podman[283041]: 2025-11-25 10:05:43.928284743 +0000 UTC m=+0.029001978 container create 1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_fermi, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 10:05:43 compute-0 systemd[1]: Started libpod-conmon-1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6.scope.
Nov 25 10:05:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:05:43 compute-0 podman[283041]: 2025-11-25 10:05:43.981509485 +0000 UTC m=+0.082226730 container init 1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:05:43 compute-0 podman[283041]: 2025-11-25 10:05:43.986048344 +0000 UTC m=+0.086765579 container start 1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_fermi, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:05:43 compute-0 podman[283041]: 2025-11-25 10:05:43.987247131 +0000 UTC m=+0.087964376 container attach 1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 10:05:43 compute-0 objective_fermi[283055]: 167 167
Nov 25 10:05:43 compute-0 systemd[1]: libpod-1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6.scope: Deactivated successfully.
Nov 25 10:05:43 compute-0 conmon[283055]: conmon 1dead739c882b375c4ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6.scope/container/memory.events
Nov 25 10:05:43 compute-0 podman[283041]: 2025-11-25 10:05:43.990226935 +0000 UTC m=+0.090944180 container died 1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_fermi, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e05ec764fb78b74f367d7d53764fb1783b38103b18128a4e503fbddc3a6bd064-merged.mount: Deactivated successfully.
Nov 25 10:05:44 compute-0 podman[283041]: 2025-11-25 10:05:44.010527122 +0000 UTC m=+0.111244367 container remove 1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:05:44 compute-0 podman[283041]: 2025-11-25 10:05:43.915538975 +0000 UTC m=+0.016256240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:05:44 compute-0 systemd[1]: libpod-conmon-1dead739c882b375c4acea4247ed17c79fc7c04eaee0210c026176762e9630c6.scope: Deactivated successfully.
Nov 25 10:05:44 compute-0 podman[283076]: 2025-11-25 10:05:44.126655426 +0000 UTC m=+0.027845950 container create a1f68f5d984210ae0c68dc4db21a73aa878e2675478fbb6225a73a50aeeeee51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:05:44 compute-0 systemd[1]: Started libpod-conmon-a1f68f5d984210ae0c68dc4db21a73aa878e2675478fbb6225a73a50aeeeee51.scope.
Nov 25 10:05:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb4e06826b9a18cf55fc73328bb4f94978f65b478cbc094ad9b851eee23f33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb4e06826b9a18cf55fc73328bb4f94978f65b478cbc094ad9b851eee23f33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb4e06826b9a18cf55fc73328bb4f94978f65b478cbc094ad9b851eee23f33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb4e06826b9a18cf55fc73328bb4f94978f65b478cbc094ad9b851eee23f33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb4e06826b9a18cf55fc73328bb4f94978f65b478cbc094ad9b851eee23f33/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:44 compute-0 podman[283076]: 2025-11-25 10:05:44.187283693 +0000 UTC m=+0.088474227 container init a1f68f5d984210ae0c68dc4db21a73aa878e2675478fbb6225a73a50aeeeee51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:05:44 compute-0 podman[283076]: 2025-11-25 10:05:44.1916772 +0000 UTC m=+0.092867713 container start a1f68f5d984210ae0c68dc4db21a73aa878e2675478fbb6225a73a50aeeeee51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 10:05:44 compute-0 podman[283076]: 2025-11-25 10:05:44.192854537 +0000 UTC m=+0.094045071 container attach a1f68f5d984210ae0c68dc4db21a73aa878e2675478fbb6225a73a50aeeeee51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kalam, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 10:05:44 compute-0 podman[283076]: 2025-11-25 10:05:44.115931717 +0000 UTC m=+0.017122252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:05:44 compute-0 zen_kalam[283090]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:05:44 compute-0 zen_kalam[283090]: --> All data devices are unavailable
Nov 25 10:05:44 compute-0 systemd[1]: libpod-a1f68f5d984210ae0c68dc4db21a73aa878e2675478fbb6225a73a50aeeeee51.scope: Deactivated successfully.
Nov 25 10:05:44 compute-0 podman[283076]: 2025-11-25 10:05:44.441115371 +0000 UTC m=+0.342305895 container died a1f68f5d984210ae0c68dc4db21a73aa878e2675478fbb6225a73a50aeeeee51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 10:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ddb4e06826b9a18cf55fc73328bb4f94978f65b478cbc094ad9b851eee23f33-merged.mount: Deactivated successfully.
Nov 25 10:05:44 compute-0 podman[283076]: 2025-11-25 10:05:44.466248652 +0000 UTC m=+0.367439165 container remove a1f68f5d984210ae0c68dc4db21a73aa878e2675478fbb6225a73a50aeeeee51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_kalam, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 10:05:44 compute-0 systemd[1]: libpod-conmon-a1f68f5d984210ae0c68dc4db21a73aa878e2675478fbb6225a73a50aeeeee51.scope: Deactivated successfully.
Nov 25 10:05:44 compute-0 sudo[282958]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:44 compute-0 ceph-mon[74207]: pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:05:44 compute-0 ceph-mon[74207]: pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:05:44 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:05:44 compute-0 sudo[283115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:05:44 compute-0 sudo[283115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:44 compute-0 sudo[283115]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:44 compute-0 sudo[283140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:05:44 compute-0 sudo[283140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:05:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:44.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:05:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:44.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:44 compute-0 podman[283195]: 2025-11-25 10:05:44.862781837 +0000 UTC m=+0.025768979 container create e18d0968310394c0c2f8881c884ea08849aa116588350916f6f90cfce9f48fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:05:44 compute-0 nova_compute[253512]: 2025-11-25 10:05:44.881 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:44 compute-0 systemd[1]: Started libpod-conmon-e18d0968310394c0c2f8881c884ea08849aa116588350916f6f90cfce9f48fc9.scope.
Nov 25 10:05:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:05:44 compute-0 podman[283195]: 2025-11-25 10:05:44.918767459 +0000 UTC m=+0.081754620 container init e18d0968310394c0c2f8881c884ea08849aa116588350916f6f90cfce9f48fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:05:44 compute-0 podman[283195]: 2025-11-25 10:05:44.923367804 +0000 UTC m=+0.086354945 container start e18d0968310394c0c2f8881c884ea08849aa116588350916f6f90cfce9f48fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_northcutt, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:05:44 compute-0 podman[283195]: 2025-11-25 10:05:44.924665228 +0000 UTC m=+0.087652379 container attach e18d0968310394c0c2f8881c884ea08849aa116588350916f6f90cfce9f48fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_northcutt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 10:05:44 compute-0 objective_northcutt[283208]: 167 167
Nov 25 10:05:44 compute-0 systemd[1]: libpod-e18d0968310394c0c2f8881c884ea08849aa116588350916f6f90cfce9f48fc9.scope: Deactivated successfully.
Nov 25 10:05:44 compute-0 podman[283195]: 2025-11-25 10:05:44.92646083 +0000 UTC m=+0.089447981 container died e18d0968310394c0c2f8881c884ea08849aa116588350916f6f90cfce9f48fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_northcutt, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f073e982e964fce5575b958a4e7d2eab2a7b20e894aedeaf06fd244c0ba00487-merged.mount: Deactivated successfully.
Nov 25 10:05:44 compute-0 podman[283195]: 2025-11-25 10:05:44.945080712 +0000 UTC m=+0.108067853 container remove e18d0968310394c0c2f8881c884ea08849aa116588350916f6f90cfce9f48fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_northcutt, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:05:44 compute-0 podman[283195]: 2025-11-25 10:05:44.852440248 +0000 UTC m=+0.015427409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:05:44 compute-0 systemd[1]: libpod-conmon-e18d0968310394c0c2f8881c884ea08849aa116588350916f6f90cfce9f48fc9.scope: Deactivated successfully.
Nov 25 10:05:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:05:44
Nov 25 10:05:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:05:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:05:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.nfs', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'backups']
Nov 25 10:05:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 10:05:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:05:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:05:45 compute-0 podman[283230]: 2025-11-25 10:05:45.065215594 +0000 UTC m=+0.029063865 container create d4a1596142f8554e49b57c0ce1ec7ffd4caa30f9202a2441d8100483b827cbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_swanson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 10:05:45 compute-0 systemd[1]: Started libpod-conmon-d4a1596142f8554e49b57c0ce1ec7ffd4caa30f9202a2441d8100483b827cbc9.scope.
Nov 25 10:05:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d200fc496e90b4b0d55274eff116685dcfba5e9203059512d144c885767c8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d200fc496e90b4b0d55274eff116685dcfba5e9203059512d144c885767c8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d200fc496e90b4b0d55274eff116685dcfba5e9203059512d144c885767c8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d200fc496e90b4b0d55274eff116685dcfba5e9203059512d144c885767c8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:05:45 compute-0 podman[283230]: 2025-11-25 10:05:45.117727043 +0000 UTC m=+0.081575324 container init d4a1596142f8554e49b57c0ce1ec7ffd4caa30f9202a2441d8100483b827cbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_swanson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:05:45 compute-0 podman[283230]: 2025-11-25 10:05:45.123785845 +0000 UTC m=+0.087634105 container start d4a1596142f8554e49b57c0ce1ec7ffd4caa30f9202a2441d8100483b827cbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_swanson, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 10:05:45 compute-0 podman[283230]: 2025-11-25 10:05:45.125069112 +0000 UTC m=+0.088917373 container attach d4a1596142f8554e49b57c0ce1ec7ffd4caa30f9202a2441d8100483b827cbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_swanson, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:05:45 compute-0 podman[283230]: 2025-11-25 10:05:45.054542329 +0000 UTC m=+0.018390601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:05:45 compute-0 objective_swanson[283243]: {
Nov 25 10:05:45 compute-0 objective_swanson[283243]:     "1": [
Nov 25 10:05:45 compute-0 objective_swanson[283243]:         {
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "devices": [
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "/dev/loop3"
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             ],
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "lv_name": "ceph_lv0",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "lv_size": "21470642176",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "name": "ceph_lv0",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "tags": {
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.cluster_name": "ceph",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.crush_device_class": "",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.encrypted": "0",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.osd_id": "1",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.type": "block",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.vdo": "0",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:                 "ceph.with_tpm": "0"
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             },
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "type": "block",
Nov 25 10:05:45 compute-0 objective_swanson[283243]:             "vg_name": "ceph_vg0"
Nov 25 10:05:45 compute-0 objective_swanson[283243]:         }
Nov 25 10:05:45 compute-0 objective_swanson[283243]:     ]
Nov 25 10:05:45 compute-0 objective_swanson[283243]: }
Nov 25 10:05:45 compute-0 systemd[1]: libpod-d4a1596142f8554e49b57c0ce1ec7ffd4caa30f9202a2441d8100483b827cbc9.scope: Deactivated successfully.
Nov 25 10:05:45 compute-0 podman[283230]: 2025-11-25 10:05:45.362987556 +0000 UTC m=+0.326835827 container died d4a1596142f8554e49b57c0ce1ec7ffd4caa30f9202a2441d8100483b827cbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 10:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-37d200fc496e90b4b0d55274eff116685dcfba5e9203059512d144c885767c8f-merged.mount: Deactivated successfully.
Nov 25 10:05:45 compute-0 podman[283230]: 2025-11-25 10:05:45.38360448 +0000 UTC m=+0.347452730 container remove d4a1596142f8554e49b57c0ce1ec7ffd4caa30f9202a2441d8100483b827cbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 10:05:45 compute-0 systemd[1]: libpod-conmon-d4a1596142f8554e49b57c0ce1ec7ffd4caa30f9202a2441d8100483b827cbc9.scope: Deactivated successfully.
Nov 25 10:05:45 compute-0 sudo[283140]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:45 compute-0 sudo[283262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:05:45 compute-0 sudo[283262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:45 compute-0 sudo[283262]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:45 compute-0 sudo[283287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:05:45 compute-0 sudo[283287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:05:45 compute-0 podman[283345]: 2025-11-25 10:05:45.788165656 +0000 UTC m=+0.026085475 container create 7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 10:05:45 compute-0 systemd[1]: Started libpod-conmon-7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5.scope.
Nov 25 10:05:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:05:45 compute-0 podman[283345]: 2025-11-25 10:05:45.833465972 +0000 UTC m=+0.071385790 container init 7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 10:05:45 compute-0 podman[283345]: 2025-11-25 10:05:45.837705627 +0000 UTC m=+0.075625446 container start 7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 10:05:45 compute-0 podman[283345]: 2025-11-25 10:05:45.838934763 +0000 UTC m=+0.076854582 container attach 7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:05:45 compute-0 loving_blackwell[283358]: 167 167
Nov 25 10:05:45 compute-0 systemd[1]: libpod-7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5.scope: Deactivated successfully.
Nov 25 10:05:45 compute-0 conmon[283358]: conmon 7b3be3b7a6f844c2f197 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5.scope/container/memory.events
Nov 25 10:05:45 compute-0 podman[283345]: 2025-11-25 10:05:45.840903552 +0000 UTC m=+0.078823371 container died 7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_blackwell, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-717dcc0da5cc28b7f55183813046cfd55cb2440f74ba865fb842a9cc9835b1cc-merged.mount: Deactivated successfully.
Nov 25 10:05:45 compute-0 podman[283345]: 2025-11-25 10:05:45.855622345 +0000 UTC m=+0.093542163 container remove 7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:05:45 compute-0 podman[283345]: 2025-11-25 10:05:45.7773001 +0000 UTC m=+0.015219939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:05:45 compute-0 systemd[1]: libpod-conmon-7b3be3b7a6f844c2f1976f9dc05ff030438aaba39221cc09d3a2115458c349a5.scope: Deactivated successfully.
Nov 25 10:05:45 compute-0 podman[283382]: 2025-11-25 10:05:45.975021521 +0000 UTC m=+0.028582988 container create 9a52698e43b3f865d49d01d44845d94b1c1f0fafa2bdb6f37c65dc378ceb9fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 10:05:46 compute-0 systemd[1]: Started libpod-conmon-9a52698e43b3f865d49d01d44845d94b1c1f0fafa2bdb6f37c65dc378ceb9fd8.scope.
Nov 25 10:05:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61a5eb046e7e44e8f7ebfa0e5155fe54fc0c97385d3b171aa7b3411e98285b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61a5eb046e7e44e8f7ebfa0e5155fe54fc0c97385d3b171aa7b3411e98285b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61a5eb046e7e44e8f7ebfa0e5155fe54fc0c97385d3b171aa7b3411e98285b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61a5eb046e7e44e8f7ebfa0e5155fe54fc0c97385d3b171aa7b3411e98285b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:05:46 compute-0 podman[283382]: 2025-11-25 10:05:46.035436847 +0000 UTC m=+0.088998335 container init 9a52698e43b3f865d49d01d44845d94b1c1f0fafa2bdb6f37c65dc378ceb9fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:05:46 compute-0 podman[283382]: 2025-11-25 10:05:46.044337271 +0000 UTC m=+0.097898739 container start 9a52698e43b3f865d49d01d44845d94b1c1f0fafa2bdb6f37c65dc378ceb9fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:05:46 compute-0 podman[283382]: 2025-11-25 10:05:46.046549999 +0000 UTC m=+0.100111488 container attach 9a52698e43b3f865d49d01d44845d94b1c1f0fafa2bdb6f37c65dc378ceb9fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 10:05:46 compute-0 podman[283382]: 2025-11-25 10:05:45.964756856 +0000 UTC m=+0.018318345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:05:46 compute-0 nova_compute[253512]: 2025-11-25 10:05:46.239 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:46 compute-0 lvm[283473]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:05:46 compute-0 lvm[283473]: VG ceph_vg0 finished
Nov 25 10:05:46 compute-0 hopeful_noyce[283396]: {}
Nov 25 10:05:46 compute-0 systemd[1]: libpod-9a52698e43b3f865d49d01d44845d94b1c1f0fafa2bdb6f37c65dc378ceb9fd8.scope: Deactivated successfully.
Nov 25 10:05:46 compute-0 podman[283382]: 2025-11-25 10:05:46.519143539 +0000 UTC m=+0.572705007 container died 9a52698e43b3f865d49d01d44845d94b1c1f0fafa2bdb6f37c65dc378ceb9fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-61a5eb046e7e44e8f7ebfa0e5155fe54fc0c97385d3b171aa7b3411e98285b64-merged.mount: Deactivated successfully.
Nov 25 10:05:46 compute-0 podman[283382]: 2025-11-25 10:05:46.539537623 +0000 UTC m=+0.593099090 container remove 9a52698e43b3f865d49d01d44845d94b1c1f0fafa2bdb6f37c65dc378ceb9fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_noyce, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 10:05:46 compute-0 systemd[1]: libpod-conmon-9a52698e43b3f865d49d01d44845d94b1c1f0fafa2bdb6f37c65dc378ceb9fd8.scope: Deactivated successfully.
Nov 25 10:05:46 compute-0 sudo[283287]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:05:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:46 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:05:46 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:46 compute-0 ceph-mon[74207]: pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:46 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:46 compute-0 sudo[283484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:05:46 compute-0 sudo[283484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:05:46 compute-0 sudo[283484]: pam_unix(sudo:session): session closed for user root
Nov 25 10:05:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:46.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:46.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
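These beast lines are radosgw's access log: connection pointer, client address, user ("anonymous" for these load-balancer HEAD probes), timestamp, request line, HTTP status, body bytes, and a trailing latency. A small parser for that layout — the regex is read off the lines themselves, not from any documented format, so treat it as illustrative:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous '
            '[25/Nov/2025:10:05:46.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    # -> 192.168.122.100 anonymous 200 0.000000000
    print(m.group("addr"), m.group("user"), m.group("status"), m.group("latency"))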
Nov 25 10:05:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:47.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:47.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:47.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:47.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
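The dispatcher error and the three per-webhook retries above all reduce to one failure: np0005534694-6.shiftstack do not resolve against the DNS server at 192.168.122.80, so the dashboard receivers are unreachable. A quick probe that reproduces the lookup; it uses the system resolver, so it only mirrors Alertmanager's behaviour when run on a host whose /etc/resolv.conf points at 192.168.122.80:

    import socket

    HOSTS = [
        "np0005534694.shiftstack",
        "np0005534695.shiftstack",
        "np0005534696.shiftstack",
    ]

    for host in HOSTS:
        try:
            addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
            print(f"{host}: resolves to {sorted(addrs)}")
        except socket.gaierror as exc:
            # Matches the "lookup ... no such host" failures in the log.
            print(f"{host}: lookup failed ({exc})")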
Nov 25 10:05:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:05:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:48 compute-0 ceph-mon[74207]: pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:48.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:48.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:48.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:48.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:48.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:48.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:49 compute-0 nova_compute[253512]: 2025-11-25 10:05:49.883 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:50] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:05:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:05:50] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:05:50 compute-0 ceph-mon[74207]: pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 600 B/s rd, 0 op/s
Nov 25 10:05:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:50.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:50.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:50 compute-0 podman[283513]: 2025-11-25 10:05:50.984403267 +0000 UTC m=+0.041908758 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 10:05:51 compute-0 nova_compute[253512]: 2025-11-25 10:05:51.241 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 901 B/s rd, 0 op/s
Nov 25 10:05:52 compute-0 ceph-mon[74207]: pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 901 B/s rd, 0 op/s
Nov 25 10:05:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:52.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:52.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 901 B/s rd, 0 op/s
Nov 25 10:05:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 10:05:53 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1723778353' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:05:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 10:05:53 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1723778353' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:05:54 compute-0 ceph-mon[74207]: pgmap v1043: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 901 B/s rd, 0 op/s
Nov 25 10:05:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1723778353' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:05:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1723778353' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:05:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:54.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:54.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:54 compute-0 nova_compute[253512]: 2025-11-25 10:05:54.885 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
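The autoscaler arithmetic above is internally consistent: every "pg target" equals usage_fraction x bias x 300, which fits mon_target_pg_per_osd at its default of 100 across the 3 OSDs backing this 60 GiB cluster (e.g. 0.000665858301588852 x 1.0 x 300 = 0.19975749047665559 for 'images'), and the target is then rounded up to a power of two with a per-pool floor (32 for most pools, 16 for cephfs.cephfs.meta, 1 for .mgr). A back-of-envelope sketch under those assumptions — the 300 multiplier and the floor parameter are inferred from the log values, not taken from Ceph's code:

    def pg_target(usage_fraction, bias, pg_min, osds=3, target_pg_per_osd=100):
        raw = usage_fraction * bias * osds * target_pg_per_osd
        # Round up to the next power of two, never below the pool's floor.
        quantized = pg_min
        while quantized < raw:
            quantized *= 2
        return raw, quantized

    for pool, usage, bias, pg_min in [
        (".mgr",               7.185749983720779e-06, 1.0,  1),
        ("images",             0.000665858301588852,  1.0, 32),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 16),
    ]:
        raw, q = pg_target(usage, bias, pg_min)
        # Reproduces the logged "pg target ... quantized to ..." values.
        print(f"{pool}: raw target {raw:.19f} -> {q}")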
Nov 25 10:05:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:56 compute-0 nova_compute[253512]: 2025-11-25 10:05:56.244 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:56 compute-0 ceph-mon[74207]: pgmap v1044: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:56.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:05:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:56.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:05:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:57.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:57.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:57.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:57.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:05:58 compute-0 ceph-mon[74207]: pgmap v1045: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:05:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:58.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:05:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:05:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:58.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:05:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:58.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:58.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:58.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:05:58.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:05:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:05:59 compute-0 nova_compute[253512]: 2025-11-25 10:05:59.886 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:05:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:05:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:00] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:06:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:00] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:06:00 compute-0 ceph-mon[74207]: pgmap v1046: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:00.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:00.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:01 compute-0 nova_compute[253512]: 2025-11-25 10:06:01.246 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:06:02 compute-0 ceph-mon[74207]: pgmap v1047: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:06:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:02.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:02.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:03 compute-0 sudo[283543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:06:03 compute-0 sudo[283543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:03 compute-0 sudo[283543]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:04 compute-0 ceph-mon[74207]: pgmap v1048: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:04.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:04.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:04 compute-0 nova_compute[253512]: 2025-11-25 10:06:04.887 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:06:05.393 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:06:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:06:05.393 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:06:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:06:05.393 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:06:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:06 compute-0 nova_compute[253512]: 2025-11-25 10:06:06.248 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:06:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:06.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:06:06 compute-0 ceph-mon[74207]: pgmap v1049: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:06.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:06 compute-0 podman[283571]: 2025-11-25 10:06:06.972717112 +0000 UTC m=+0.036696802 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:06:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:07.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:07.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:07.102Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:07.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:06:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:08.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:06:08 compute-0 ceph-mon[74207]: pgmap v1050: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:08.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:08.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:08.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:08.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:08.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:09 compute-0 nova_compute[253512]: 2025-11-25 10:06:09.889 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:10] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:06:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:10] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:06:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:06:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:06:10 compute-0 ceph-mon[74207]: pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:10.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:11 compute-0 nova_compute[253512]: 2025-11-25 10:06:11.249 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:12.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:12 compute-0 ceph-mon[74207]: pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:12.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:13 compute-0 podman[283595]: 2025-11-25 10:06:13.98726119 +0000 UTC m=+0.049093241 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 10:06:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:14.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:14 compute-0 ceph-mon[74207]: pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:14.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:14 compute-0 nova_compute[253512]: 2025-11-25 10:06:14.891 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:06:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:06:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:06:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:06:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:06:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:06:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:06:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:16 compute-0 nova_compute[253512]: 2025-11-25 10:06:16.252 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:16.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:16 compute-0 ceph-mon[74207]: pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:16.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:17.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:17.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:17.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:17.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:18.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:18 compute-0 ceph-mon[74207]: pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:18.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:18.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:18.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:18.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:18.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:19 compute-0 nova_compute[253512]: 2025-11-25 10:06:19.893 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:20] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:06:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:20] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:06:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:20.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:20 compute-0 ceph-mon[74207]: pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:20.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:21 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0[105427]: logger=infra.usagestats t=2025-11-25T10:06:21.19629416Z level=info msg="Usage stats are ready to report"
Nov 25 10:06:21 compute-0 nova_compute[253512]: 2025-11-25 10:06:21.253 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:21 compute-0 podman[283626]: 2025-11-25 10:06:21.978505004 +0000 UTC m=+0.039556128 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 25 10:06:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:22 compute-0 ceph-mon[74207]: pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:22.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:23 compute-0 sudo[283646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:06:23 compute-0 sudo[283646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:23 compute-0 sudo[283646]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:24.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:24 compute-0 ceph-mon[74207]: pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:24.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:24 compute-0 nova_compute[253512]: 2025-11-25 10:06:24.895 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:26 compute-0 nova_compute[253512]: 2025-11-25 10:06:26.255 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:26 compute-0 ceph-mon[74207]: pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:06:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:26.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:06:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:27.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:27.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:27.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:27.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1588309167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:28 compute-0 nova_compute[253512]: 2025-11-25 10:06:28.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:28 compute-0 nova_compute[253512]: 2025-11-25 10:06:28.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:28.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:28 compute-0 ceph-mon[74207]: pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2385987291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:28.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:28.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:28.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:28.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:28.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:29 compute-0 nova_compute[253512]: 2025-11-25 10:06:29.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:29 compute-0 nova_compute[253512]: 2025-11-25 10:06:29.492 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:06:29 compute-0 nova_compute[253512]: 2025-11-25 10:06:29.492 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:06:29 compute-0 nova_compute[253512]: 2025-11-25 10:06:29.492 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:06:29 compute-0 nova_compute[253512]: 2025-11-25 10:06:29.492 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:06:29 compute-0 nova_compute[253512]: 2025-11-25 10:06:29.492 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:06:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/407174291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:06:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752908873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:29 compute-0 nova_compute[253512]: 2025-11-25 10:06:29.834 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:06:29 compute-0 nova_compute[253512]: 2025-11-25 10:06:29.895 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:06:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.036 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.037 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4571MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.037 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.038 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.083 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.083 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.100 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:06:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:30] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:06:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:30] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:06:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:06:30 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3383596287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.443 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.446 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.457 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.459 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:06:30 compute-0 nova_compute[253512]: 2025-11-25 10:06:30.459 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:06:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:30.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:30 compute-0 ceph-mon[74207]: pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:30 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2752908873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2902791618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3383596287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:06:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:30.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:31 compute-0 nova_compute[253512]: 2025-11-25 10:06:31.257 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:31 compute-0 nova_compute[253512]: 2025-11-25 10:06:31.455 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:31 compute-0 nova_compute[253512]: 2025-11-25 10:06:31.455 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:32 compute-0 nova_compute[253512]: 2025-11-25 10:06:32.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:32 compute-0 ceph-mon[74207]: pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:32.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:34 compute-0 nova_compute[253512]: 2025-11-25 10:06:34.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:34 compute-0 nova_compute[253512]: 2025-11-25 10:06:34.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:06:34 compute-0 nova_compute[253512]: 2025-11-25 10:06:34.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:06:34 compute-0 nova_compute[253512]: 2025-11-25 10:06:34.489 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:06:34 compute-0 nova_compute[253512]: 2025-11-25 10:06:34.490 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:34 compute-0 nova_compute[253512]: 2025-11-25 10:06:34.490 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:06:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:34.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:34 compute-0 ceph-mon[74207]: pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:34 compute-0 nova_compute[253512]: 2025-11-25 10:06:34.898 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:36 compute-0 nova_compute[253512]: 2025-11-25 10:06:36.260 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:36.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:36 compute-0 ceph-mon[74207]: pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:36.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:37.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:37.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:37.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:37.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:37 compute-0 nova_compute[253512]: 2025-11-25 10:06:37.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:37 compute-0 podman[283729]: 2025-11-25 10:06:37.973392297 +0000 UTC m=+0.037351074 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:06:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:06:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:38.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:06:38 compute-0 ceph-mon[74207]: pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:38.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:38.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:38.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:38.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:39 compute-0 nova_compute[253512]: 2025-11-25 10:06:39.899 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:40] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:06:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:40] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:06:40 compute-0 nova_compute[253512]: 2025-11-25 10:06:40.467 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:06:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:40.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:40 compute-0 ceph-mon[74207]: pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:40.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:41 compute-0 nova_compute[253512]: 2025-11-25 10:06:41.262 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:42.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:42 compute-0 ceph-mon[74207]: pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:06:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:42.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:44 compute-0 sudo[283753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:06:44 compute-0 sudo[283753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:44 compute-0 sudo[283753]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:44 compute-0 podman[283777]: 2025-11-25 10:06:44.076852701 +0000 UTC m=+0.055100876 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 10:06:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:06:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:44.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:06:44 compute-0 ceph-mon[74207]: pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:44.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:44 compute-0 nova_compute[253512]: 2025-11-25 10:06:44.901 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:06:44
Nov 25 10:06:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:06:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:06:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['images', 'default.rgw.control', 'volumes', 'backups', 'vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta', '.rgw.root', 'default.rgw.log']
Nov 25 10:06:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 10:06:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:06:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:06:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:06:46 compute-0 nova_compute[253512]: 2025-11-25 10:06:46.264 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:06:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:46.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:06:46 compute-0 sudo[283803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:06:46 compute-0 sudo[283803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:46 compute-0 sudo[283803]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:46 compute-0 sudo[283828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:06:46 compute-0 sudo[283828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:46 compute-0 ceph-mon[74207]: pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:46.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:47.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:47.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:47.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:47 compute-0 sudo[283828]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:06:47 compute-0 sudo[283882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:06:47 compute-0 sudo[283882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:47 compute-0 sudo[283882]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:47 compute-0 sudo[283907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:06:47 compute-0 sudo[283907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:06:47 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:06:47 compute-0 podman[283964]: 2025-11-25 10:06:47.856727996 +0000 UTC m=+0.028103615 container create e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:06:47 compute-0 systemd[1]: Started libpod-conmon-e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca.scope.
Nov 25 10:06:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:06:47 compute-0 podman[283964]: 2025-11-25 10:06:47.903346044 +0000 UTC m=+0.074721663 container init e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:06:47 compute-0 podman[283964]: 2025-11-25 10:06:47.907721394 +0000 UTC m=+0.079097024 container start e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:06:47 compute-0 podman[283964]: 2025-11-25 10:06:47.908944378 +0000 UTC m=+0.080319999 container attach e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:06:47 compute-0 sleepy_davinci[283977]: 167 167
Nov 25 10:06:47 compute-0 systemd[1]: libpod-e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca.scope: Deactivated successfully.
Nov 25 10:06:47 compute-0 conmon[283977]: conmon e2ea82bc02659fe337b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca.scope/container/memory.events
Nov 25 10:06:47 compute-0 podman[283964]: 2025-11-25 10:06:47.912170756 +0000 UTC m=+0.083546376 container died e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_davinci, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c8e2ebd3588c115807a47eafc7f5cd3e6611f7c75397592459a902ccb8be9e7-merged.mount: Deactivated successfully.
Nov 25 10:06:47 compute-0 podman[283964]: 2025-11-25 10:06:47.931201672 +0000 UTC m=+0.102577292 container remove e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 10:06:47 compute-0 podman[283964]: 2025-11-25 10:06:47.844527565 +0000 UTC m=+0.015903206 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:06:47 compute-0 systemd[1]: libpod-conmon-e2ea82bc02659fe337b78466fbe7a56334d6fb54206ea93ca6ab519a52af1cca.scope: Deactivated successfully.
Nov 25 10:06:48 compute-0 podman[283999]: 2025-11-25 10:06:48.053626616 +0000 UTC m=+0.031451601 container create 0ac9adf2a3bcb9d86742cc942e86decdf143ffa29611e17e90079384bd448b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nightingale, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:06:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:48 compute-0 systemd[1]: Started libpod-conmon-0ac9adf2a3bcb9d86742cc942e86decdf143ffa29611e17e90079384bd448b6f.scope.
Nov 25 10:06:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f802f8097a52c28c32cbcb01d9e4133a4eb4bb9b686caa0a5660682221b5e7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f802f8097a52c28c32cbcb01d9e4133a4eb4bb9b686caa0a5660682221b5e7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f802f8097a52c28c32cbcb01d9e4133a4eb4bb9b686caa0a5660682221b5e7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f802f8097a52c28c32cbcb01d9e4133a4eb4bb9b686caa0a5660682221b5e7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f802f8097a52c28c32cbcb01d9e4133a4eb4bb9b686caa0a5660682221b5e7a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:48 compute-0 podman[283999]: 2025-11-25 10:06:48.114166227 +0000 UTC m=+0.091991232 container init 0ac9adf2a3bcb9d86742cc942e86decdf143ffa29611e17e90079384bd448b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:06:48 compute-0 podman[283999]: 2025-11-25 10:06:48.118875187 +0000 UTC m=+0.096700172 container start 0ac9adf2a3bcb9d86742cc942e86decdf143ffa29611e17e90079384bd448b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nightingale, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:06:48 compute-0 podman[283999]: 2025-11-25 10:06:48.119945984 +0000 UTC m=+0.097770969 container attach 0ac9adf2a3bcb9d86742cc942e86decdf143ffa29611e17e90079384bd448b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nightingale, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 10:06:48 compute-0 podman[283999]: 2025-11-25 10:06:48.040282662 +0000 UTC m=+0.018107657 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:06:48 compute-0 fervent_nightingale[284013]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:06:48 compute-0 fervent_nightingale[284013]: --> All data devices are unavailable
Nov 25 10:06:48 compute-0 systemd[1]: libpod-0ac9adf2a3bcb9d86742cc942e86decdf143ffa29611e17e90079384bd448b6f.scope: Deactivated successfully.
Nov 25 10:06:48 compute-0 podman[284028]: 2025-11-25 10:06:48.400055908 +0000 UTC m=+0.015262689 container died 0ac9adf2a3bcb9d86742cc942e86decdf143ffa29611e17e90079384bd448b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 10:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f802f8097a52c28c32cbcb01d9e4133a4eb4bb9b686caa0a5660682221b5e7a-merged.mount: Deactivated successfully.
Nov 25 10:06:48 compute-0 podman[284028]: 2025-11-25 10:06:48.419846956 +0000 UTC m=+0.035053737 container remove 0ac9adf2a3bcb9d86742cc942e86decdf143ffa29611e17e90079384bd448b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_nightingale, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:06:48 compute-0 systemd[1]: libpod-conmon-0ac9adf2a3bcb9d86742cc942e86decdf143ffa29611e17e90079384bd448b6f.scope: Deactivated successfully.
Nov 25 10:06:48 compute-0 sudo[283907]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:48 compute-0 sudo[284040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:06:48 compute-0 sudo[284040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:48 compute-0 sudo[284040]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:48 compute-0 sudo[284065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:06:48 compute-0 sudo[284065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 10:06:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:48.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 10:06:48 compute-0 podman[284121]: 2025-11-25 10:06:48.818670997 +0000 UTC m=+0.026750347 container create 38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bardeen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 10:06:48 compute-0 systemd[1]: Started libpod-conmon-38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4.scope.
Nov 25 10:06:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:48.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:48 compute-0 ceph-mon[74207]: pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Nov 25 10:06:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:06:48 compute-0 podman[284121]: 2025-11-25 10:06:48.866565298 +0000 UTC m=+0.074644669 container init 38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bardeen, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:06:48 compute-0 podman[284121]: 2025-11-25 10:06:48.87169009 +0000 UTC m=+0.079769442 container start 38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:06:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:48.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:48 compute-0 podman[284121]: 2025-11-25 10:06:48.873619155 +0000 UTC m=+0.081698526 container attach 38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bardeen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:06:48 compute-0 youthful_bardeen[284134]: 167 167
Nov 25 10:06:48 compute-0 systemd[1]: libpod-38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4.scope: Deactivated successfully.
Nov 25 10:06:48 compute-0 conmon[284134]: conmon 38acb45fa65e09d0ef85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4.scope/container/memory.events
Nov 25 10:06:48 compute-0 podman[284121]: 2025-11-25 10:06:48.875808548 +0000 UTC m=+0.083887899 container died 38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:06:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:48.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:48.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:48.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6be128e23f845dc447cee1d329b9949b382d52287772bd78950854a8fd5b5e0-merged.mount: Deactivated successfully.
Nov 25 10:06:48 compute-0 podman[284121]: 2025-11-25 10:06:48.898906505 +0000 UTC m=+0.106985856 container remove 38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bardeen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:06:48 compute-0 podman[284121]: 2025-11-25 10:06:48.807688139 +0000 UTC m=+0.015767500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:06:48 compute-0 systemd[1]: libpod-conmon-38acb45fa65e09d0ef858361fc730560026ceeffd4a4bb2f26680725a9786cf4.scope: Deactivated successfully.
Nov 25 10:06:49 compute-0 podman[284157]: 2025-11-25 10:06:49.019333286 +0000 UTC m=+0.028611263 container create 4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 10:06:49 compute-0 systemd[1]: Started libpod-conmon-4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5.scope.
Nov 25 10:06:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15b7fbeeef460c2b833f24a637c640a084570c5739edb607e17cf8cefadddaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15b7fbeeef460c2b833f24a637c640a084570c5739edb607e17cf8cefadddaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15b7fbeeef460c2b833f24a637c640a084570c5739edb607e17cf8cefadddaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15b7fbeeef460c2b833f24a637c640a084570c5739edb607e17cf8cefadddaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:49 compute-0 podman[284157]: 2025-11-25 10:06:49.075421332 +0000 UTC m=+0.084699309 container init 4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:06:49 compute-0 podman[284157]: 2025-11-25 10:06:49.081785789 +0000 UTC m=+0.091063765 container start 4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 25 10:06:49 compute-0 podman[284157]: 2025-11-25 10:06:49.082987944 +0000 UTC m=+0.092265921 container attach 4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_payne, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:06:49 compute-0 podman[284157]: 2025-11-25 10:06:49.007877098 +0000 UTC m=+0.017155074 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:06:49 compute-0 peaceful_payne[284170]: {
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:     "1": [
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:         {
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "devices": [
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "/dev/loop3"
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             ],
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "lv_name": "ceph_lv0",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "lv_size": "21470642176",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "name": "ceph_lv0",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "tags": {
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.cluster_name": "ceph",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.crush_device_class": "",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.encrypted": "0",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.osd_id": "1",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.type": "block",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.vdo": "0",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:                 "ceph.with_tpm": "0"
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             },
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "type": "block",
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:             "vg_name": "ceph_vg0"
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:         }
Nov 25 10:06:49 compute-0 peaceful_payne[284170]:     ]
Nov 25 10:06:49 compute-0 peaceful_payne[284170]: }
Nov 25 10:06:49 compute-0 systemd[1]: libpod-4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5.scope: Deactivated successfully.
Nov 25 10:06:49 compute-0 conmon[284170]: conmon 4b72c904d85a435db679 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5.scope/container/memory.events
Nov 25 10:06:49 compute-0 podman[284157]: 2025-11-25 10:06:49.315421587 +0000 UTC m=+0.324699564 container died 4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:06:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Nov 25 10:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a15b7fbeeef460c2b833f24a637c640a084570c5739edb607e17cf8cefadddaf-merged.mount: Deactivated successfully.
Nov 25 10:06:49 compute-0 podman[284157]: 2025-11-25 10:06:49.337727572 +0000 UTC m=+0.347005548 container remove 4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 25 10:06:49 compute-0 systemd[1]: libpod-conmon-4b72c904d85a435db679d8858fe191695fc06308fdfa8bc9ec82a2033d2688b5.scope: Deactivated successfully.
Nov 25 10:06:49 compute-0 sudo[284065]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:49 compute-0 sudo[284189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:06:49 compute-0 sudo[284189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:49 compute-0 sudo[284189]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:49 compute-0 sudo[284214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:06:49 compute-0 sudo[284214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:49 compute-0 podman[284271]: 2025-11-25 10:06:49.72706803 +0000 UTC m=+0.028760474 container create 0a1b216ca2847b68bd0c6051bf12b7c30aa89a55941a3fa98fc9c7140bf82996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_khorana, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 10:06:49 compute-0 systemd[1]: Started libpod-conmon-0a1b216ca2847b68bd0c6051bf12b7c30aa89a55941a3fa98fc9c7140bf82996.scope.
Nov 25 10:06:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:06:49 compute-0 podman[284271]: 2025-11-25 10:06:49.776762352 +0000 UTC m=+0.078454806 container init 0a1b216ca2847b68bd0c6051bf12b7c30aa89a55941a3fa98fc9c7140bf82996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_khorana, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:06:49 compute-0 podman[284271]: 2025-11-25 10:06:49.782234228 +0000 UTC m=+0.083926682 container start 0a1b216ca2847b68bd0c6051bf12b7c30aa89a55941a3fa98fc9c7140bf82996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_khorana, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 10:06:49 compute-0 podman[284271]: 2025-11-25 10:06:49.783399793 +0000 UTC m=+0.085092238 container attach 0a1b216ca2847b68bd0c6051bf12b7c30aa89a55941a3fa98fc9c7140bf82996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_khorana, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:06:49 compute-0 sad_khorana[284285]: 167 167
Nov 25 10:06:49 compute-0 systemd[1]: libpod-0a1b216ca2847b68bd0c6051bf12b7c30aa89a55941a3fa98fc9c7140bf82996.scope: Deactivated successfully.
Nov 25 10:06:49 compute-0 podman[284271]: 2025-11-25 10:06:49.786418991 +0000 UTC m=+0.088111435 container died 0a1b216ca2847b68bd0c6051bf12b7c30aa89a55941a3fa98fc9c7140bf82996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0998bb50aeee5cf29268bfdd052b8e71e2ad72ed6a9eff7c7430a43c28a09f0c-merged.mount: Deactivated successfully.
Nov 25 10:06:49 compute-0 podman[284271]: 2025-11-25 10:06:49.804238746 +0000 UTC m=+0.105931190 container remove 0a1b216ca2847b68bd0c6051bf12b7c30aa89a55941a3fa98fc9c7140bf82996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:06:49 compute-0 podman[284271]: 2025-11-25 10:06:49.716411687 +0000 UTC m=+0.018104131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:06:49 compute-0 systemd[1]: libpod-conmon-0a1b216ca2847b68bd0c6051bf12b7c30aa89a55941a3fa98fc9c7140bf82996.scope: Deactivated successfully.
Nov 25 10:06:49 compute-0 nova_compute[253512]: 2025-11-25 10:06:49.901 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:49 compute-0 podman[284307]: 2025-11-25 10:06:49.931943005 +0000 UTC m=+0.034855935 container create 004fc13263452756253a5373b2f2588530a40fde9343fd79d9d6bebb2ab7db64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:06:49 compute-0 systemd[1]: Started libpod-conmon-004fc13263452756253a5373b2f2588530a40fde9343fd79d9d6bebb2ab7db64.scope.
Nov 25 10:06:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e34ad9101db46e097c036ab74253b300820531ed03d120c0a09d19962c8d7df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e34ad9101db46e097c036ab74253b300820531ed03d120c0a09d19962c8d7df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e34ad9101db46e097c036ab74253b300820531ed03d120c0a09d19962c8d7df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e34ad9101db46e097c036ab74253b300820531ed03d120c0a09d19962c8d7df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:06:49 compute-0 podman[284307]: 2025-11-25 10:06:49.993652529 +0000 UTC m=+0.096565459 container init 004fc13263452756253a5373b2f2588530a40fde9343fd79d9d6bebb2ab7db64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:06:49 compute-0 podman[284307]: 2025-11-25 10:06:49.999237217 +0000 UTC m=+0.102150147 container start 004fc13263452756253a5373b2f2588530a40fde9343fd79d9d6bebb2ab7db64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 10:06:50 compute-0 podman[284307]: 2025-11-25 10:06:50.002802062 +0000 UTC m=+0.105715002 container attach 004fc13263452756253a5373b2f2588530a40fde9343fd79d9d6bebb2ab7db64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 10:06:50 compute-0 podman[284307]: 2025-11-25 10:06:49.916117436 +0000 UTC m=+0.019030386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:06:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:50] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:06:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:06:50] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:06:50 compute-0 lvm[284398]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:06:50 compute-0 lvm[284398]: VG ceph_vg0 finished
Nov 25 10:06:50 compute-0 great_wilson[284321]: {}
Nov 25 10:06:50 compute-0 systemd[1]: libpod-004fc13263452756253a5373b2f2588530a40fde9343fd79d9d6bebb2ab7db64.scope: Deactivated successfully.
Nov 25 10:06:50 compute-0 podman[284307]: 2025-11-25 10:06:50.506987787 +0000 UTC m=+0.609900717 container died 004fc13263452756253a5373b2f2588530a40fde9343fd79d9d6bebb2ab7db64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 10:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e34ad9101db46e097c036ab74253b300820531ed03d120c0a09d19962c8d7df-merged.mount: Deactivated successfully.
Nov 25 10:06:50 compute-0 podman[284307]: 2025-11-25 10:06:50.530617717 +0000 UTC m=+0.633530646 container remove 004fc13263452756253a5373b2f2588530a40fde9343fd79d9d6bebb2ab7db64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 10:06:50 compute-0 systemd[1]: libpod-conmon-004fc13263452756253a5373b2f2588530a40fde9343fd79d9d6bebb2ab7db64.scope: Deactivated successfully.
Nov 25 10:06:50 compute-0 sudo[284214]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:06:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:50 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:06:50 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:50 compute-0 sudo[284409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:06:50 compute-0 sudo[284409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:06:50 compute-0 sudo[284409]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:50.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:50.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:50 compute-0 ceph-mon[74207]: pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Nov 25 10:06:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:50 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:06:51 compute-0 nova_compute[253512]: 2025-11-25 10:06:51.266 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Nov 25 10:06:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:52.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:52 compute-0 ceph-mon[74207]: pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Nov 25 10:06:52 compute-0 podman[284436]: 2025-11-25 10:06:52.97533177 +0000 UTC m=+0.039177574 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:06:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:53 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Nov 25 10:06:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:54.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:54.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:54 compute-0 ceph-mon[74207]: pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Nov 25 10:06:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/353881434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:06:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/353881434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:06:54 compute-0 nova_compute[253512]: 2025-11-25 10:06:54.903 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:06:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:06:56 compute-0 nova_compute[253512]: 2025-11-25 10:06:56.267 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:56.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:06:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:56.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:06:56 compute-0 ceph-mon[74207]: pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Nov 25 10:06:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:57.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:57.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:57.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:57.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:57 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Nov 25 10:06:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:06:58 compute-0 nova_compute[253512]: 2025-11-25 10:06:58.334 253516 DEBUG oslo_concurrency.processutils [None req-dbea7ddb-eb32-4e63-954a-7e69465c4db7 331b917bd3774be79aebd5ee1af3b1fa f414368112e54eacbcaf4af631b3b667 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:06:58 compute-0 nova_compute[253512]: 2025-11-25 10:06:58.358 253516 DEBUG oslo_concurrency.processutils [None req-dbea7ddb-eb32-4e63-954a-7e69465c4db7 331b917bd3774be79aebd5ee1af3b1fa f414368112e54eacbcaf4af631b3b667 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:06:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:58.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:06:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:06:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:58.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:06:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:58.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:58.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:58.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:06:58.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:06:58 compute-0 ceph-mon[74207]: pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Nov 25 10:06:59 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:06:59 compute-0 nova_compute[253512]: 2025-11-25 10:06:59.903 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:06:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:06:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:00] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:07:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:00] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:07:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:00.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:00.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:00 compute-0 ceph-mon[74207]: pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:01 compute-0 nova_compute[253512]: 2025-11-25 10:07:01.269 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:01 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:02.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:02.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:02 compute-0 ceph-mon[74207]: pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:03 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:04 compute-0 sudo[284466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:07:04 compute-0 sudo[284466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:04 compute-0 sudo[284466]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:04 compute-0 nova_compute[253512]: 2025-11-25 10:07:04.424 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:04 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:07:04.425 164791 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:07:04 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:07:04.425 164791 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:07:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:07:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:07:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:04.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:04 compute-0 nova_compute[253512]: 2025-11-25 10:07:04.905 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:04 compute-0 ceph-mon[74207]: pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:05 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:07:05.393 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:07:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:07:05.393 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:07:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:07:05.394 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:07:06 compute-0 nova_compute[253512]: 2025-11-25 10:07:06.271 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:06.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:06 compute-0 ceph-mon[74207]: pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:07.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:07.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:07.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:07.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:07 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:08.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:08.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:08.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:08.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:08.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:08.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:08 compute-0 ceph-mon[74207]: pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:08 compute-0 podman[284495]: 2025-11-25 10:07:08.970582526 +0000 UTC m=+0.031534506 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 10:07:09 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:09 compute-0 nova_compute[253512]: 2025-11-25 10:07:09.906 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:10] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:07:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:10] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:07:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:10.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
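The paired beast lines above are external health probes: 192.168.122.100 and 192.168.122.102 each send an anonymous HEAD / roughly every two seconds, and radosgw answers 200 with an empty body. A minimal stand-in for that probe, using only the Python standard library (the target host and port are assumptions; the log does not record which address radosgw listens on):

    import http.client

    # Issue the same anonymous probe the monitors send ("HEAD /") and check
    # for the 200 the beast access lines record. Host and port are assumed.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")        # http.client speaks HTTP/1.1; same semantics
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # a healthy radosgw answers 200 with no body
    conn.close()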
Nov 25 10:07:10 compute-0 ceph-mon[74207]: pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:11 compute-0 nova_compute[253512]: 2025-11-25 10:07:11.273 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:11 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:11 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:07:11.427 164791 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a23dd616-1012-4f28-8d7d-927fdaae5f69, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:07:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:12.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:12 compute-0 ceph-mon[74207]: pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
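The pgmap lines repeat every couple of seconds from both ceph-mgr and ceph-mon and carry the same fixed set of fields. If you need them as data rather than reading them, a small regex is enough; the pattern below is a local assumption about this exact layout, not anything Ceph provides:

    import re

    # One of the recurring status lines, copied from the log above.
    LINE = ("pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, "
            "289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s")

    PATTERN = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: "
        r"(?P<in_state>\d+) (?P<state>[\w+]+); "
        r"(?P<data>\d+ \w+) data, (?P<used>\d+ \w+) used, "
        r"(?P<avail>\d+ \w+) / (?P<total>\d+ \w+) avail"
    )

    m = PATTERN.search(LINE)
    if m:
        print(m.groupdict())
        # {'ver': '1082', 'pgs': '337', 'in_state': '337',
        #  'state': 'active+clean', 'data': '41 MiB', 'used': '289 MiB', ...}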
Nov 25 10:07:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:13 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:14.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:14.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:14 compute-0 nova_compute[253512]: 2025-11-25 10:07:14.907 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:14 compute-0 ceph-mon[74207]: pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:07:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:14 compute-0 podman[284517]: 2025-11-25 10:07:14.989702941 +0000 UTC m=+0.055091476 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
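These podman health_status events record the result of each container's configured healthcheck ('test': '/openstack/healthcheck'). The same state can be read back on demand with podman inspect; the key under State has changed spelling across podman releases, so this sketch checks both (container name taken from the event above):

    import json
    import subprocess

    # Query the current healthcheck state for the ovn_controller container.
    raw = subprocess.run(["podman", "inspect", "ovn_controller"],
                         check=True, capture_output=True, text=True).stdout
    state = json.loads(raw)[0]["State"]
    # Older podman used "Healthcheck", newer uses "Health"; accept either.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status", "no healthcheck configured"))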
Nov 25 10:07:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:07:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:07:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:07:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:07:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:07:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:07:15 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:16 compute-0 nova_compute[253512]: 2025-11-25 10:07:16.275 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:16.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:16.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:16 compute-0 ceph-mon[74207]: pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:17.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:17.302Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:17.303Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:17.314Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
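Every Alertmanager failure in this window is the same fault repeated: the three ceph-dashboard webhook targets (np0005534694 through np0005534696.shiftstack) do not resolve against the DNS server at 192.168.122.80, so each notify attempt dies at name resolution before any TCP connection is made, and retries cannot succeed until those records exist. A quick check against that specific resolver (hostnames and resolver IP are taken from the log; the third-party dnspython package is an assumed dependency):

    import dns.resolver

    RESOLVER = "192.168.122.80"
    TARGETS = [
        "np0005534694.shiftstack",
        "np0005534695.shiftstack",
        "np0005534696.shiftstack",
    ]

    # Ask the same server Alertmanager is using, bypassing /etc/resolv.conf.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [RESOLVER]

    for name in TARGETS:
        try:
            answer = resolver.resolve(name, "A")
            print(name, "->", [rr.address for rr in answer])
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer) as exc:
            # NXDOMAIN here corresponds to the "no such host" errors above.
            print(name, "-> lookup failed:", exc.__class__.__name__)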
Nov 25 10:07:17 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:18.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:18.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:18.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:18.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:18.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:18.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:18 compute-0 ceph-mon[74207]: pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:19 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:19 compute-0 nova_compute[253512]: 2025-11-25 10:07:19.908 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:20] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:07:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:20] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:07:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:20.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:20.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:20 compute-0 ceph-mon[74207]: pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:21 compute-0 nova_compute[253512]: 2025-11-25 10:07:21.277 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:21 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:22.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:22.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:22 compute-0 ceph-mon[74207]: pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:23 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:23 compute-0 podman[284549]: 2025-11-25 10:07:23.975412502 +0000 UTC m=+0.040124126 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:07:24 compute-0 ceph-mon[74207]: pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:24 compute-0 sudo[284567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:07:24 compute-0 sudo[284567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:24 compute-0 sudo[284567]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:24.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:24.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:24 compute-0 nova_compute[253512]: 2025-11-25 10:07:24.910 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:25 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:26 compute-0 nova_compute[253512]: 2025-11-25 10:07:26.279 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:26 compute-0 ceph-mon[74207]: pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:26.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:27.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:27.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:27.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:27.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:27 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3596999287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:28 compute-0 ceph-mon[74207]: pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2678034550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:28.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:28.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:28.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:28.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:28.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:28.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:29 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:29 compute-0 nova_compute[253512]: 2025-11-25 10:07:29.911 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:07:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
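The audit channel shows the mgr polling `osd blocklist ls` as a structured JSON mon command about every 15 seconds. The same command can be issued directly through the python-rados binding; this sketch assumes /etc/ceph/ceph.conf plus a keyring with mon read caps is present on the host:

    import json
    import rados

    # Send the same mon command the audit log records and print the reply.
    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
        print(ret, out.decode() or errs)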
Nov 25 10:07:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:30] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:07:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:30] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:07:30 compute-0 ceph-mon[74207]: pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.494 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.494 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.494 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.494 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.494 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:07:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:30.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:07:30 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223156061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:30 compute-0 nova_compute[253512]: 2025-11-25 10:07:30.827 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
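Worth noting: the resource tracker obtains Ceph capacity by shelling out to the ceph CLI (about 0.33s per call here) rather than going through librados. The equivalent call with the same flags the log shows, parsed for the cluster totals (field names in the returned JSON are as in recent Ceph releases; a usable client.openstack keyring is assumed):

    import json
    import subprocess

    # Same command the resource tracker runs, taken verbatim from the log.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print("total bytes:", stats["total_bytes"],
          "avail bytes:", stats["total_avail_bytes"])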
Nov 25 10:07:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.010 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.011 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4565MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.011 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.011 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.064 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.065 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.078 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.280 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:31 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:07:31 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450865442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.415 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.419 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:07:31 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4223156061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:31 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1450865442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.429 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
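The inventory dict above is what Placement uses to admit new workloads: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. Worked out for the values reported here (a minimal sketch, using the formula as Placement applies it):

    # Inventory exactly as reported in the log line above.
    inventory = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g} schedulable")
    # VCPU: 16  MEMORY_MB: 7169  DISK_GB: 52.2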
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.430 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:07:31 compute-0 nova_compute[253512]: 2025-11-25 10:07:31.430 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:07:32 compute-0 ceph-mon[74207]: pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:32 compute-0 nova_compute[253512]: 2025-11-25 10:07:32.426 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:32 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3449064442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:32 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2299839423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:07:32 compute-0 nova_compute[253512]: 2025-11-25 10:07:32.470 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:32.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:32.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:33 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:34 compute-0 ceph-mon[74207]: pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:34.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:34.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:34 compute-0 nova_compute[253512]: 2025-11-25 10:07:34.913 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:35 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:35 compute-0 nova_compute[253512]: 2025-11-25 10:07:35.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:35 compute-0 nova_compute[253512]: 2025-11-25 10:07:35.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:07:35 compute-0 nova_compute[253512]: 2025-11-25 10:07:35.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:07:35 compute-0 nova_compute[253512]: 2025-11-25 10:07:35.483 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:07:35 compute-0 nova_compute[253512]: 2025-11-25 10:07:35.483 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:35 compute-0 nova_compute[253512]: 2025-11-25 10:07:35.483 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
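The "Running periodic task ComputeManager._..." lines all come from oslo.service's PeriodicTasks machinery, which nova's ComputeManager subclasses; every decorated method is invoked from a single run_periodic_tasks() pass, which is why they fire in bursts under one request id. A stripped-down illustration (class and method names here are mine, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        """Minimal stand-in for a manager with periodic tasks."""

        def __init__(self):
            super().__init__(cfg.CONF)

        # run_immediately=True so a single manual tick executes the task.
        @periodic_task.periodic_task(spacing=10, run_immediately=True)
        def _heal_info_cache(self, context):
            print("healing info cache")

    # A running service calls this on a timer; one manual tick for the demo.
    DemoManager().run_periodic_tasks(context=None)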
Nov 25 10:07:36 compute-0 nova_compute[253512]: 2025-11-25 10:07:36.282 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:36 compute-0 ceph-mon[74207]: pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:36.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:36.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:37.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:37.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:37.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:37.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:37 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:38 compute-0 ceph-mon[74207]: pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:38.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:38.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:38.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:38.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:38.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:38.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
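The webhook warnings above, and the retry-cancellation error between them, share one root cause: Alertmanager's ceph-dashboard receiver cannot resolve np0005534694.shiftstack, np0005534695.shiftstack, or np0005534696.shiftstack against the resolver at 192.168.122.80:53, so every POST to :8443/api/prometheus_receiver fails before a connection is made. A minimal sketch to confirm the lookup failure from outside the container, assuming dnspython is available (the library choice is ours; the hostnames and resolver IP are taken from the log lines above):

    import dns.exception
    import dns.resolver

    # Resolver IP named in the "no such host" errors above.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.168.122.80"]

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            answer = resolver.resolve(host, "A")
            print(host, "->", [rr.address for rr in answer])
        except dns.resolver.NXDOMAIN:
            print(host, "-> NXDOMAIN (matches the 'no such host' errors)")
        except dns.exception.DNSException as exc:
            print(host, "-> lookup failed:", exc)

If these names are only meant to resolve inside the overcloud, adding records to the zone served by 192.168.122.80 (or host entries visible to the alertmanager container) would stop the retry loop.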
Nov 25 10:07:39 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:39 compute-0 nova_compute[253512]: 2025-11-25 10:07:39.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:07:39 compute-0 nova_compute[253512]: 2025-11-25 10:07:39.916 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:39 compute-0 podman[284651]: 2025-11-25 10:07:39.974459661 +0000 UTC m=+0.038153933 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:07:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:40] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:07:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:40] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:07:40 compute-0 ceph-mon[74207]: pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:40.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:40.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:41 compute-0 nova_compute[253512]: 2025-11-25 10:07:41.283 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:41 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:42 compute-0 ceph-mon[74207]: pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:42.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:42.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:43 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:44 compute-0 sudo[284673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:07:44 compute-0 sudo[284673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:44 compute-0 sudo[284673]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:44 compute-0 ceph-mon[74207]: pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:44.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:44.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:44 compute-0 nova_compute[253512]: 2025-11-25 10:07:44.917 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:07:44
Nov 25 10:07:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:07:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:07:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'images', '.rgw.root', 'volumes', 'vms', 'backups']
Nov 25 10:07:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
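The balancer entries above record one periodic pass: mode upmap, max misplaced 5%, evaluated across the twelve listed pools, ending in "prepared 0/10 upmap changes", i.e. the PG distribution already satisfies the optimizer and no upmap items were generated. A sketch of querying the same state from the CLI, assuming "ceph balancer status" with JSON output (a standard mgr command; the exact JSON field names are an assumption to verify against your release):

    import json
    import subprocess

    # Query the mgr balancer module that produced the log lines above.
    raw = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(raw)

    print("active:", status.get("active"))
    print("mode:", status.get("mode"))               # expect "upmap", as logged
    print("result:", status.get("optimize_result"))  # why no changes were prepared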
Nov 25 10:07:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:07:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:07:45 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:07:45 compute-0 podman[284699]: 2025-11-25 10:07:45.994528574 +0000 UTC m=+0.058819854 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:07:46 compute-0 nova_compute[253512]: 2025-11-25 10:07:46.284 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:46 compute-0 ceph-mon[74207]: pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:46.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:46.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:47.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:47.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:47.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:47.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:47 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:48 compute-0 ceph-mon[74207]: pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:48.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:48.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 25 10:07:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:48.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:48.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:48.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:48.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:49 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:49 compute-0 nova_compute[253512]: 2025-11-25 10:07:49.919 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:50] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Nov 25 10:07:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:07:50] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Nov 25 10:07:50 compute-0 ceph-mon[74207]: pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:07:50 compute-0 sudo[284727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:07:50 compute-0 sudo[284727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:50 compute-0 sudo[284727]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:50.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:50 compute-0 sudo[284752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:07:50 compute-0 sudo[284752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:50.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:51 compute-0 sudo[284752]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:51 compute-0 nova_compute[253512]: 2025-11-25 10:07:51.286 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:51 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 25 10:07:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 25 10:07:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:52 compute-0 ceph-mon[74207]: pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:07:52 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:52 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:52.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:07:52 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:07:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:07:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:07:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 536 B/s rd, 0 op/s
Nov 25 10:07:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:07:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:07:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:07:52 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:07:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:07:52 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:07:52 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:07:52 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:07:52 compute-0 sudo[284808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:07:52 compute-0 sudo[284808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:52 compute-0 sudo[284808]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:52.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:52 compute-0 sudo[284833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:07:52 compute-0 sudo[284833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:53 compute-0 podman[284890]: 2025-11-25 10:07:53.257215846 +0000 UTC m=+0.034379158 container create b4319387b823200ac2a3d47d6c944949bd31f92845797ac220c49a1b2f081597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:07:53 compute-0 systemd[1]: Started libpod-conmon-b4319387b823200ac2a3d47d6c944949bd31f92845797ac220c49a1b2f081597.scope.
Nov 25 10:07:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:07:53 compute-0 podman[284890]: 2025-11-25 10:07:53.317220731 +0000 UTC m=+0.094384063 container init b4319387b823200ac2a3d47d6c944949bd31f92845797ac220c49a1b2f081597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_archimedes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:07:53 compute-0 podman[284890]: 2025-11-25 10:07:53.322358455 +0000 UTC m=+0.099521766 container start b4319387b823200ac2a3d47d6c944949bd31f92845797ac220c49a1b2f081597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_archimedes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:07:53 compute-0 podman[284890]: 2025-11-25 10:07:53.323621124 +0000 UTC m=+0.100784446 container attach b4319387b823200ac2a3d47d6c944949bd31f92845797ac220c49a1b2f081597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_archimedes, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Nov 25 10:07:53 compute-0 laughing_archimedes[284903]: 167 167
Nov 25 10:07:53 compute-0 systemd[1]: libpod-b4319387b823200ac2a3d47d6c944949bd31f92845797ac220c49a1b2f081597.scope: Deactivated successfully.
Nov 25 10:07:53 compute-0 podman[284890]: 2025-11-25 10:07:53.327086907 +0000 UTC m=+0.104250238 container died b4319387b823200ac2a3d47d6c944949bd31f92845797ac220c49a1b2f081597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_archimedes, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:07:53 compute-0 podman[284890]: 2025-11-25 10:07:53.244885184 +0000 UTC m=+0.022048517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-344caded973f59239cc7a778882671b2781e1b1546ad9c6b8f5d4bb34e92ef56-merged.mount: Deactivated successfully.
Nov 25 10:07:53 compute-0 podman[284890]: 2025-11-25 10:07:53.345256602 +0000 UTC m=+0.122419914 container remove b4319387b823200ac2a3d47d6c944949bd31f92845797ac220c49a1b2f081597 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 10:07:53 compute-0 systemd[1]: libpod-conmon-b4319387b823200ac2a3d47d6c944949bd31f92845797ac220c49a1b2f081597.scope: Deactivated successfully.
Nov 25 10:07:53 compute-0 podman[284926]: 2025-11-25 10:07:53.480625292 +0000 UTC m=+0.032017286 container create aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:07:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:07:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:07:53 compute-0 ceph-mon[74207]: pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 536 B/s rd, 0 op/s
Nov 25 10:07:53 compute-0 ceph-mon[74207]: pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:07:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:07:53 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:07:53 compute-0 systemd[1]: Started libpod-conmon-aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc.scope.
Nov 25 10:07:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f1176b9f3db3a9a1769319c1e2df03518793332be1bd14777f26ce925ed517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f1176b9f3db3a9a1769319c1e2df03518793332be1bd14777f26ce925ed517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f1176b9f3db3a9a1769319c1e2df03518793332be1bd14777f26ce925ed517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f1176b9f3db3a9a1769319c1e2df03518793332be1bd14777f26ce925ed517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f1176b9f3db3a9a1769319c1e2df03518793332be1bd14777f26ce925ed517/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:53 compute-0 podman[284926]: 2025-11-25 10:07:53.545110021 +0000 UTC m=+0.096502025 container init aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_visvesvaraya, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:07:53 compute-0 podman[284926]: 2025-11-25 10:07:53.552419498 +0000 UTC m=+0.103811492 container start aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:07:53 compute-0 podman[284926]: 2025-11-25 10:07:53.553978716 +0000 UTC m=+0.105370711 container attach aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_visvesvaraya, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:07:53 compute-0 podman[284926]: 2025-11-25 10:07:53.467489564 +0000 UTC m=+0.018881578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:07:53 compute-0 distracted_visvesvaraya[284941]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:07:53 compute-0 distracted_visvesvaraya[284941]: --> All data devices are unavailable
Nov 25 10:07:53 compute-0 systemd[1]: libpod-aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc.scope: Deactivated successfully.
Nov 25 10:07:53 compute-0 conmon[284941]: conmon aacf27180b7cc4a5630b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc.scope/container/memory.events
Nov 25 10:07:53 compute-0 podman[284926]: 2025-11-25 10:07:53.816867333 +0000 UTC m=+0.368259327 container died aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 10:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5f1176b9f3db3a9a1769319c1e2df03518793332be1bd14777f26ce925ed517-merged.mount: Deactivated successfully.
Nov 25 10:07:53 compute-0 podman[284926]: 2025-11-25 10:07:53.842101043 +0000 UTC m=+0.393493027 container remove aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:07:53 compute-0 systemd[1]: libpod-conmon-aacf27180b7cc4a5630b0c0da79034a98718a9a2bc1a7507335c0ca3aaee65bc.scope: Deactivated successfully.
Nov 25 10:07:53 compute-0 sudo[284833]: pam_unix(sudo:session): session closed for user root
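The two short-lived ceph containers above are cephadm's OSD deployment probe: the first (laughing_archimedes) prints only the ceph uid/gid pair "167 167", and the second (distracted_visvesvaraya) runs the "lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" call shown in the sudo line, reporting "passed data devices: 0 physical, 1 LVM" and then "All data devices are unavailable" — ceph-volume rejected the LV, typically because it is already consumed by an existing OSD. A sketch for checking that, assuming ceph-volume is reachable on this host (e.g. inside "cephadm shell"); the JSON layout, keyed by OSD id, matches recent releases but should be treated as an assumption:

    import json
    import subprocess

    # Same inventory call cephadm issues a few lines below ("lvm list --format json").
    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    inventory = json.loads(raw)  # {"<osd_id>": [{"lv_path": ..., "tags": ...}, ...]}

    for osd_id, devices in inventory.items():
        for dev in devices:
            if dev.get("lv_path") == "/dev/ceph_vg0/ceph_lv0":
                print(f"/dev/ceph_vg0/ceph_lv0 already backs OSD {osd_id}; "
                      "'All data devices are unavailable' is the expected outcome")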
Nov 25 10:07:53 compute-0 sudo[284966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:07:53 compute-0 sudo[284966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:53 compute-0 sudo[284966]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 10:07:53 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1289407423' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:07:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 10:07:53 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1289407423' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:07:53 compute-0 sudo[284992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:07:53 compute-0 sudo[284992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:54 compute-0 podman[285048]: 2025-11-25 10:07:54.295844659 +0000 UTC m=+0.030721785 container create cbac237e90b3e412b8d86305f9ef6c9e2b6fae9dacdf63ccc6523136189af37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Nov 25 10:07:54 compute-0 systemd[1]: Started libpod-conmon-cbac237e90b3e412b8d86305f9ef6c9e2b6fae9dacdf63ccc6523136189af37d.scope.
Nov 25 10:07:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:07:54 compute-0 podman[285048]: 2025-11-25 10:07:54.346476248 +0000 UTC m=+0.081353375 container init cbac237e90b3e412b8d86305f9ef6c9e2b6fae9dacdf63ccc6523136189af37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:07:54 compute-0 podman[285048]: 2025-11-25 10:07:54.353149305 +0000 UTC m=+0.088026421 container start cbac237e90b3e412b8d86305f9ef6c9e2b6fae9dacdf63ccc6523136189af37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:07:54 compute-0 podman[285048]: 2025-11-25 10:07:54.355222873 +0000 UTC m=+0.090100001 container attach cbac237e90b3e412b8d86305f9ef6c9e2b6fae9dacdf63ccc6523136189af37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 10:07:54 compute-0 nervous_clarke[285062]: 167 167
Nov 25 10:07:54 compute-0 systemd[1]: libpod-cbac237e90b3e412b8d86305f9ef6c9e2b6fae9dacdf63ccc6523136189af37d.scope: Deactivated successfully.
Nov 25 10:07:54 compute-0 podman[285048]: 2025-11-25 10:07:54.3574178 +0000 UTC m=+0.092294928 container died cbac237e90b3e412b8d86305f9ef6c9e2b6fae9dacdf63ccc6523136189af37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-479f76d44e54d8a4a5c0d6abc481b51569f91bca67715d89c2e52d7a8502d3d7-merged.mount: Deactivated successfully.
Nov 25 10:07:54 compute-0 podman[285059]: 2025-11-25 10:07:54.376206653 +0000 UTC m=+0.052570986 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 25 10:07:54 compute-0 podman[285048]: 2025-11-25 10:07:54.283985857 +0000 UTC m=+0.018863005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:07:54 compute-0 podman[285048]: 2025-11-25 10:07:54.384140737 +0000 UTC m=+0.119017864 container remove cbac237e90b3e412b8d86305f9ef6c9e2b6fae9dacdf63ccc6523136189af37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 10:07:54 compute-0 systemd[1]: libpod-conmon-cbac237e90b3e412b8d86305f9ef6c9e2b6fae9dacdf63ccc6523136189af37d.scope: Deactivated successfully.
Nov 25 10:07:54 compute-0 podman[285101]: 2025-11-25 10:07:54.513230211 +0000 UTC m=+0.031929852 container create 070ea359d6f42f8e3b55f68ab98883a5434aa1cb51b24fb3f8be5a29599be1b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 10:07:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1289407423' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:07:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1289407423' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:07:54 compute-0 systemd[1]: Started libpod-conmon-070ea359d6f42f8e3b55f68ab98883a5434aa1cb51b24fb3f8be5a29599be1b8.scope.
Nov 25 10:07:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1a23f829b3f2f4a6dfdcc47a75a2cfded190322b57f7922ae6e7af9af307d87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1a23f829b3f2f4a6dfdcc47a75a2cfded190322b57f7922ae6e7af9af307d87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1a23f829b3f2f4a6dfdcc47a75a2cfded190322b57f7922ae6e7af9af307d87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1a23f829b3f2f4a6dfdcc47a75a2cfded190322b57f7922ae6e7af9af307d87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:54 compute-0 podman[285101]: 2025-11-25 10:07:54.573343372 +0000 UTC m=+0.092043012 container init 070ea359d6f42f8e3b55f68ab98883a5434aa1cb51b24fb3f8be5a29599be1b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_edison, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:07:54 compute-0 podman[285101]: 2025-11-25 10:07:54.579691716 +0000 UTC m=+0.098391356 container start 070ea359d6f42f8e3b55f68ab98883a5434aa1cb51b24fb3f8be5a29599be1b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_edison, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:07:54 compute-0 podman[285101]: 2025-11-25 10:07:54.580847855 +0000 UTC m=+0.099547495 container attach 070ea359d6f42f8e3b55f68ab98883a5434aa1cb51b24fb3f8be5a29599be1b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 25 10:07:54 compute-0 podman[285101]: 2025-11-25 10:07:54.500838015 +0000 UTC m=+0.019537645 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:07:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:54.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
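The anonymous "HEAD / HTTP/1.0" requests that recur roughly every two seconds from 192.168.122.102 and 192.168.122.100 have the shape of load-balancer health probes against radosgw: the beast frontend logs one "starting new request"/"req done" pair plus an access line per probe, all reusing the same request buffer (req=0x7ff15b1b05d0). A minimal sketch of an equivalent probe in Python; the gateway's bound port does not appear in these lines, so 8080 below is only a placeholder:

    import http.client

    # Hypothetical endpoint and port; substitute the real RGW frontend.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # the gateway above answers 200 with an empty body
    conn.close()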
Nov 25 10:07:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:54 compute-0 relaxed_edison[285114]: {
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:     "1": [
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:         {
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "devices": [
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "/dev/loop3"
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             ],
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "lv_name": "ceph_lv0",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "lv_size": "21470642176",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "name": "ceph_lv0",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "tags": {
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.cluster_name": "ceph",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.crush_device_class": "",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.encrypted": "0",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.osd_id": "1",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.type": "block",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.vdo": "0",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:                 "ceph.with_tpm": "0"
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             },
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "type": "block",
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:             "vg_name": "ceph_vg0"
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:         }
Nov 25 10:07:54 compute-0 relaxed_edison[285114]:     ]
Nov 25 10:07:54 compute-0 relaxed_edison[285114]: }
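The JSON block printed by the relaxed_edison container looks like the output of a ceph-volume "lvm list --format json" run: cephadm drives ceph-volume through short-lived podman containers (the create/init/start/attach/died/remove events around it), one per call, and the later "raw list --format json" invocation at 10:07:55 returns the empty "{}" the same way. A small sketch that walks this inventory, assuming the JSON block has been saved to a file (the filename is hypothetical); the keys match the log above:

    import json

    # Hypothetical capture: the JSON block above saved out of the log.
    with open("ceph_volume_lvm_list.json") as fh:
        inventory = json.load(fh)

    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, encrypted={tags['ceph.encrypted']})")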
Nov 25 10:07:54 compute-0 systemd[1]: libpod-070ea359d6f42f8e3b55f68ab98883a5434aa1cb51b24fb3f8be5a29599be1b8.scope: Deactivated successfully.
Nov 25 10:07:54 compute-0 podman[285123]: 2025-11-25 10:07:54.886714376 +0000 UTC m=+0.023581958 container died 070ea359d6f42f8e3b55f68ab98883a5434aa1cb51b24fb3f8be5a29599be1b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 25 10:07:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:54.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1a23f829b3f2f4a6dfdcc47a75a2cfded190322b57f7922ae6e7af9af307d87-merged.mount: Deactivated successfully.
Nov 25 10:07:54 compute-0 podman[285123]: 2025-11-25 10:07:54.907575925 +0000 UTC m=+0.044443497 container remove 070ea359d6f42f8e3b55f68ab98883a5434aa1cb51b24fb3f8be5a29599be1b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:07:54 compute-0 systemd[1]: libpod-conmon-070ea359d6f42f8e3b55f68ab98883a5434aa1cb51b24fb3f8be5a29599be1b8.scope: Deactivated successfully.
Nov 25 10:07:54 compute-0 nova_compute[253512]: 2025-11-25 10:07:54.921 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:54 compute-0 sudo[284992]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:54 compute-0 sudo[285135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:07:54 compute-0 sudo[285135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:54 compute-0 sudo[285135]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:55 compute-0 sudo[285160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:07:55 compute-0 sudo[285160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:55 compute-0 podman[285217]: 2025-11-25 10:07:55.369207007 +0000 UTC m=+0.033505172 container create ec281b9de406102d0bfe0c43375dee7b8a8541e15ecfacce6399694c06ea35a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:07:55 compute-0 systemd[1]: Started libpod-conmon-ec281b9de406102d0bfe0c43375dee7b8a8541e15ecfacce6399694c06ea35a7.scope.
Nov 25 10:07:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:07:55 compute-0 podman[285217]: 2025-11-25 10:07:55.42604844 +0000 UTC m=+0.090346625 container init ec281b9de406102d0bfe0c43375dee7b8a8541e15ecfacce6399694c06ea35a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:07:55 compute-0 podman[285217]: 2025-11-25 10:07:55.432576233 +0000 UTC m=+0.096874399 container start ec281b9de406102d0bfe0c43375dee7b8a8541e15ecfacce6399694c06ea35a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 10:07:55 compute-0 podman[285217]: 2025-11-25 10:07:55.436391184 +0000 UTC m=+0.100689349 container attach ec281b9de406102d0bfe0c43375dee7b8a8541e15ecfacce6399694c06ea35a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 25 10:07:55 compute-0 unruffled_hoover[285230]: 167 167
Nov 25 10:07:55 compute-0 systemd[1]: libpod-ec281b9de406102d0bfe0c43375dee7b8a8541e15ecfacce6399694c06ea35a7.scope: Deactivated successfully.
Nov 25 10:07:55 compute-0 podman[285217]: 2025-11-25 10:07:55.438513334 +0000 UTC m=+0.102811499 container died ec281b9de406102d0bfe0c43375dee7b8a8541e15ecfacce6399694c06ea35a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 25 10:07:55 compute-0 podman[285217]: 2025-11-25 10:07:55.353555516 +0000 UTC m=+0.017853692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-18e586a87896372fa10ffa08318c80f1adbe6fce06e76ff14ea8c679d8120460-merged.mount: Deactivated successfully.
Nov 25 10:07:55 compute-0 podman[285217]: 2025-11-25 10:07:55.458563524 +0000 UTC m=+0.122861688 container remove ec281b9de406102d0bfe0c43375dee7b8a8541e15ecfacce6399694c06ea35a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 10:07:55 compute-0 systemd[1]: libpod-conmon-ec281b9de406102d0bfe0c43375dee7b8a8541e15ecfacce6399694c06ea35a7.scope: Deactivated successfully.
Nov 25 10:07:55 compute-0 ceph-mon[74207]: pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:07:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
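Each pg_autoscaler line above is reproducible as space_ratio x bias x (OSD count x target PGs per OSD). With the 3 OSDs implied by the 60 GiB cluster (3 x ~20 GiB LVs, matching lv_size 21470642176 above) and the default mon_target_pg_per_osd of 100, pool 'images' gives 0.000665858301588852 x 300 ~= 0.19976, exactly as logged; the "quantized" value is then a power of two floored at the pool's minimum, which is why every raw target far below 1 still quantizes to 32 (or 16 for cephfs.cephfs.meta, 1 for '.mgr'). A sketch of that arithmetic; the rounding rule is an assumed simplification of the autoscaler, and actual pg_num changes are only applied when ideal and current differ by a large factor:

    import math

    def pg_target(space_ratio, bias, n_osds=3, target_pg_per_osd=100):
        # pool 'images': 0.000665858301588852 * 1.0 * 3 * 100 ~= 0.19976
        return space_ratio * bias * n_osds * target_pg_per_osd

    def quantize(raw, pg_num_min=32):
        # Assumed simplification: round the raw target up to a power of two,
        # then floor at the pool's pg_num_min (32 here; the log shows 16 for
        # cephfs.cephfs.meta and 1 for '.mgr').
        ideal = 1 if raw <= 1 else 2 ** math.ceil(math.log2(raw))
        return max(pg_num_min, ideal)

    print(pg_target(0.000665858301588852, 1.0))   # ~0.19975749, as logged
    print(quantize(0.19975749047665559))          # 32, matching 'images'
    print(quantize(0.0006104707950771635, 16))    # 16, matching cephfs.cephfs.meta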
Nov 25 10:07:55 compute-0 podman[285252]: 2025-11-25 10:07:55.596462443 +0000 UTC m=+0.033234281 container create 96435d0c979a968b53fe192eb0843c6a22c7db193e2ddab9c88bbcbab385e3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:07:55 compute-0 systemd[1]: Started libpod-conmon-96435d0c979a968b53fe192eb0843c6a22c7db193e2ddab9c88bbcbab385e3d5.scope.
Nov 25 10:07:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7169f1605f4d4c8438cfda33099be8faa962565089d22ade8b5fcda4974cac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7169f1605f4d4c8438cfda33099be8faa962565089d22ade8b5fcda4974cac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7169f1605f4d4c8438cfda33099be8faa962565089d22ade8b5fcda4974cac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7169f1605f4d4c8438cfda33099be8faa962565089d22ade8b5fcda4974cac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:07:55 compute-0 podman[285252]: 2025-11-25 10:07:55.673341442 +0000 UTC m=+0.110113280 container init 96435d0c979a968b53fe192eb0843c6a22c7db193e2ddab9c88bbcbab385e3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 10:07:55 compute-0 podman[285252]: 2025-11-25 10:07:55.583702072 +0000 UTC m=+0.020473920 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:07:55 compute-0 podman[285252]: 2025-11-25 10:07:55.67958028 +0000 UTC m=+0.116352118 container start 96435d0c979a968b53fe192eb0843c6a22c7db193e2ddab9c88bbcbab385e3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:07:55 compute-0 podman[285252]: 2025-11-25 10:07:55.682669714 +0000 UTC m=+0.119441551 container attach 96435d0c979a968b53fe192eb0843c6a22c7db193e2ddab9c88bbcbab385e3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:07:56 compute-0 vibrant_allen[285265]: {}
Nov 25 10:07:56 compute-0 lvm[285344]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:07:56 compute-0 lvm[285344]: VG ceph_vg0 finished
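The two lvm[285344] lines are LVM's event-driven autoactivation: udev reported PV /dev/loop3 online, which made ceph_vg0 complete (it has exactly one PV, per the inventory above) and finished activating its volume group. The same state can be confirmed programmatically; a sketch assuming the lvm2 JSON report interface (vgs --reportformat json):

    import json
    import subprocess

    out = subprocess.run(
        ["vgs", "--reportformat", "json", "ceph_vg0"],
        capture_output=True, text=True, check=True).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], "PVs:", vg["pv_count"], "size:", vg["vg_size"])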
Nov 25 10:07:56 compute-0 systemd[1]: libpod-96435d0c979a968b53fe192eb0843c6a22c7db193e2ddab9c88bbcbab385e3d5.scope: Deactivated successfully.
Nov 25 10:07:56 compute-0 podman[285252]: 2025-11-25 10:07:56.23178324 +0000 UTC m=+0.668555088 container died 96435d0c979a968b53fe192eb0843c6a22c7db193e2ddab9c88bbcbab385e3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d7169f1605f4d4c8438cfda33099be8faa962565089d22ade8b5fcda4974cac-merged.mount: Deactivated successfully.
Nov 25 10:07:56 compute-0 podman[285252]: 2025-11-25 10:07:56.255045904 +0000 UTC m=+0.691817743 container remove 96435d0c979a968b53fe192eb0843c6a22c7db193e2ddab9c88bbcbab385e3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:07:56 compute-0 systemd[1]: libpod-conmon-96435d0c979a968b53fe192eb0843c6a22c7db193e2ddab9c88bbcbab385e3d5.scope: Deactivated successfully.
Nov 25 10:07:56 compute-0 sudo[285160]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:07:56 compute-0 nova_compute[253512]: 2025-11-25 10:07:56.288 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:56 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:56 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:07:56 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:56 compute-0 sudo[285356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:07:56 compute-0 sudo[285356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:07:56 compute-0 sudo[285356]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:56.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:57.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:57.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:57.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:57.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
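All of the alertmanager noise in this window reduces to one cause: the ceph-dashboard webhook receivers point at np0005534694.shiftstack through np0005534696.shiftstack, and the resolver at 192.168.122.80 returns "no such host" for those names, so every notify attempt is retried 7-8 times and then canceled. A minimal reproduction of the failing lookups (hostnames taken verbatim from the lines above):

    import socket

    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
            print(host, "->", ", ".join(sorted(addrs)))
        except socket.gaierror as exc:
            print(host, "-> unresolved:", exc)  # expected on this host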
Nov 25 10:07:57 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:57 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:07:57 compute-0 ceph-mon[74207]: pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:07:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:07:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:58.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:07:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:58.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:58.891Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:58.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:07:58.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:07:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:07:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:07:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:58.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:07:59 compute-0 ceph-mon[74207]: pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 649 B/s rd, 0 op/s
Nov 25 10:07:59 compute-0 nova_compute[253512]: 2025-11-25 10:07:59.921 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:07:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:07:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:00] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:08:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:00] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:08:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 324 B/s rd, 0 op/s
Nov 25 10:08:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:00.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:00.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:01 compute-0 nova_compute[253512]: 2025-11-25 10:08:01.290 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:01 compute-0 ceph-mon[74207]: pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 324 B/s rd, 0 op/s
Nov 25 10:08:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Nov 25 10:08:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:02.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:02.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:03 compute-0 ceph-mon[74207]: pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Nov 25 10:08:04 compute-0 sudo[285389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:08:04 compute-0 sudo[285389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:04 compute-0 sudo[285389]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:08:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:04.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:08:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:04.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:04 compute-0 nova_compute[253512]: 2025-11-25 10:08:04.925 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:08:05.393 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:08:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:08:05.394 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:08:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:08:05.394 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:08:05 compute-0 ceph-mon[74207]: pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:06 compute-0 nova_compute[253512]: 2025-11-25 10:08:06.291 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:06.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:06.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:07.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:07.107Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:07.107Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:07.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:07 compute-0 ceph-mon[74207]: pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:08.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:08.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:08.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:08.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:08.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:08:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:08:09 compute-0 ceph-mon[74207]: pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:09 compute-0 nova_compute[253512]: 2025-11-25 10:08:09.925 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:10] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:08:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:10] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:08:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:10.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:10.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:10 compute-0 podman[285420]: 2025-11-25 10:08:10.998098225 +0000 UTC m=+0.046459379 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
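This event, like the multipathd health_status at the top of this window, comes from podman's periodic healthchecks: the configured test (/openstack/healthcheck, mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent per the config_data above) ran and reported healthy with a failing streak of 0. The same state can be read back from the container; a sketch that tolerates the key-naming differences podman has used across versions:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent"],
        capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    # podman has exposed the health block under both spellings over time
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), "failing streak:", health.get("FailingStreak"))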
Nov 25 10:08:11 compute-0 nova_compute[253512]: 2025-11-25 10:08:11.294 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:11 compute-0 ceph-mon[74207]: pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:12.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:12.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:13 compute-0 ceph-mon[74207]: pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:14.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:14.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:14 compute-0 nova_compute[253512]: 2025-11-25 10:08:14.927 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:08:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:08:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:08:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:08:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:08:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:08:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:08:15 compute-0 ceph-mon[74207]: pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:16 compute-0 nova_compute[253512]: 2025-11-25 10:08:16.297 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:16 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:16.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:16 compute-0 podman[285442]: 2025-11-25 10:08:16.995796162 +0000 UTC m=+0.054477361 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:08:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:17.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:17.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:17.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:17.107Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
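Every Alertmanager failure in this section is the same root cause: the ceph-dashboard webhook receivers (np0005534694.shiftstack through np0005534696.shiftstack) do not resolve against the nameserver at 192.168.122.80, so each POST fails in the dial step with "no such host". The condition is reproducible outside Alertmanager with a plain resolver call; a minimal check, with the hostnames copied verbatim from the errors above:

    import socket

    # Webhook targets taken verbatim from the Alertmanager errors above.
    for host in ('np0005534694.shiftstack',
                 'np0005534695.shiftstack',
                 'np0005534696.shiftstack'):
        try:
            addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
            print(host, '->', sorted(addrs))
        except socket.gaierror as exc:
            # Same "no such host" condition Alertmanager keeps retrying.
            print(host, '-> lookup failed:', exc)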
Nov 25 10:08:17 compute-0 ceph-mon[74207]: pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:18 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:18.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:18.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:18.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:18.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:18.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:18.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:19 compute-0 nova_compute[253512]: 2025-11-25 10:08:19.929 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:19 compute-0 ceph-mon[74207]: pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:20] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:08:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:20] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:08:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:20.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:20.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:21 compute-0 nova_compute[253512]: 2025-11-25 10:08:21.299 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:21 compute-0 ceph-mon[74207]: pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:22.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:08:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:22.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:08:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:23 compute-0 ceph-mon[74207]: pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:24 compute-0 sudo[285474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:08:24 compute-0 sudo[285474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:24 compute-0 sudo[285474]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:24.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:24.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:24 compute-0 nova_compute[253512]: 2025-11-25 10:08:24.930 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:24 compute-0 podman[285499]: 2025-11-25 10:08:24.975875956 +0000 UTC m=+0.039071754 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 10:08:25 compute-0 ceph-mon[74207]: pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:26 compute-0 nova_compute[253512]: 2025-11-25 10:08:26.301 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:08:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:26.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:08:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:26.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:27.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:27.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:27.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:27.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:27 compute-0 ceph-mon[74207]: pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:28 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:28.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:28.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:28.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:28.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:28.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:28.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:28 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2168162688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:29 compute-0 nova_compute[253512]: 2025-11-25 10:08:29.931 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:08:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:29 compute-0 ceph-mon[74207]: pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/690462983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:29 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:30] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:08:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:30] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:08:30 compute-0 nova_compute[253512]: 2025-11-25 10:08:30.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:30 compute-0 nova_compute[253512]: 2025-11-25 10:08:30.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:30 compute-0 nova_compute[253512]: 2025-11-25 10:08:30.487 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:08:30 compute-0 nova_compute[253512]: 2025-11-25 10:08:30.487 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:08:30 compute-0 nova_compute[253512]: 2025-11-25 10:08:30.487 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:08:30 compute-0 nova_compute[253512]: 2025-11-25 10:08:30.488 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:08:30 compute-0 nova_compute[253512]: 2025-11-25 10:08:30.488 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:08:30 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:30 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:08:30 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313080343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:30.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:30 compute-0 nova_compute[253512]: 2025-11-25 10:08:30.854 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
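The resource audit above shells out to `ceph df --format=json` (via oslo_concurrency.processutils) to size the storage backing the hypervisor's disk; the 0.366 s figure is the subprocess round trip. The call is easy to reproduce and inspect directly; a minimal sketch using the exact command logged above (the JSON key names match recent Ceph releases but can vary between versions):

    import json
    import subprocess

    # Same command nova-compute logged above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)

    # Cluster-wide totals; per-pool figures live under stats['pools'].
    total = stats['stats']['total_bytes']
    avail = stats['stats']['total_avail_bytes']
    print(f'{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB')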
Nov 25 10:08:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:30.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:31 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3313080343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.077 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.078 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4560MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.078 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.078 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.170 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.171 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.217 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Refreshing inventories for resource provider d9873737-caae-40cc-9346-77a33537057c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.285 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updating ProviderTree inventory for provider d9873737-caae-40cc-9346-77a33537057c from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.286 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Updating inventory in ProviderTree for provider d9873737-caae-40cc-9346-77a33537057c with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.302 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Refreshing aggregate associations for resource provider d9873737-caae-40cc-9346-77a33537057c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.303 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.329 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Refreshing trait associations for resource provider d9873737-caae-40cc-9346-77a33537057c, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX512VPCLMULQDQ,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX512VAES,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.344 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:08:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:08:31 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314521822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.728 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.732 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.744 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.745 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:08:31 compute-0 nova_compute[253512]: 2025-11-25 10:08:31.745 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
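The inventory dict the report client logs above is what Placement turns into schedulable capacity: per resource class, capacity = (total - reserved) * allocation_ratio. Checking the logged values by hand shows why this otherwise idle node advertises 16 vCPUs from 4 physical ones; a small worked example with the numbers copied from the inventory above:

    # Inventory as logged by nova.scheduler.client.report above.
    inventory = {
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {capacity:g} schedulable')
    # -> VCPU: 16, MEMORY_MB: 7169, DISK_GB: 52.2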
Nov 25 10:08:32 compute-0 ceph-mon[74207]: pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:32 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1314521822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:32 compute-0 nova_compute[253512]: 2025-11-25 10:08:32.746 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:32 compute-0 nova_compute[253512]: 2025-11-25 10:08:32.746 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:32 compute-0 nova_compute[253512]: 2025-11-25 10:08:32.746 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:32 compute-0 nova_compute[253512]: 2025-11-25 10:08:32.747 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:32 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:32.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:33 compute-0 ceph-mon[74207]: pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:34 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/35916024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:34 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:34.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:34 compute-0 nova_compute[253512]: 2025-11-25 10:08:34.932 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:35 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/341417532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:08:35 compute-0 ceph-mon[74207]: pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:36 compute-0 nova_compute[253512]: 2025-11-25 10:08:36.304 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:36 compute-0 nova_compute[253512]: 2025-11-25 10:08:36.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:36 compute-0 nova_compute[253512]: 2025-11-25 10:08:36.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:08:36 compute-0 nova_compute[253512]: 2025-11-25 10:08:36.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:08:36 compute-0 nova_compute[253512]: 2025-11-25 10:08:36.482 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:08:36 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:36.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:08:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:36.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:08:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:37.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:37.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:37.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:37.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:37 compute-0 nova_compute[253512]: 2025-11-25 10:08:37.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:37 compute-0 nova_compute[253512]: 2025-11-25 10:08:37.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:08:37 compute-0 nova_compute[253512]: 2025-11-25 10:08:37.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:37 compute-0 nova_compute[253512]: 2025-11-25 10:08:37.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
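The nova_compute entries above are oslo.service's periodic-task runner walking ComputeManager's scheduled jobs; _reclaim_queued_deletes returns immediately because reclaim_instance_interval is unset (<= 0), i.e. deferred instance reclaim is disabled on this host, while _cleanup_incomplete_migrations goes on to do real work. The underlying mechanism, as a minimal sketch against the oslo_service periodic_task API (the manager and task names here are invented for illustration):

from oslo_config import cfg
from oslo_service import periodic_task


class DemoManager(periodic_task.PeriodicTasks):
    # Same decorator pattern nova.compute.manager uses for its periodic jobs.
    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _demo_task(self, context):
        print("periodic task ran")


mgr = DemoManager(cfg.CONF)
mgr.run_periodic_tasks(context=None)  # nova drives this from its service loop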
Nov 25 10:08:37 compute-0 ceph-mon[74207]: pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:38 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:08:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:08:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:38.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:38.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:38.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:38.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:38.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
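The paired "HEAD / HTTP/1.0" probes from 192.168.122.100 and 192.168.122.102, repeating every two seconds with status 200 and zero bytes, look like load-balancer health checks against the RGW beast frontend rather than real client traffic. A small parser for the beast access-log line, with the field layout read off the entries above (the regex is inferred from these lines, not an official format specification):

import re

# pointer, client, user, timestamp, request, status, bytes, latency
BEAST_RE = re.compile(
    r'beast: (?P<ptr>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous '
        '[25/Nov/2025:10:08:38.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST_RE.search(line)
if m:
    print(m['client'], m['request'], m['status'], m['latency'])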
Nov 25 10:08:39 compute-0 nova_compute[253512]: 2025-11-25 10:08:39.480 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:39 compute-0 ceph-mon[74207]: pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:39 compute-0 nova_compute[253512]: 2025-11-25 10:08:39.934 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:40] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:08:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:40] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:08:40 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:40.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:41 compute-0 nova_compute[253512]: 2025-11-25 10:08:41.306 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:41 compute-0 ceph-mon[74207]: pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:41 compute-0 podman[285577]: 2025-11-25 10:08:41.973395212 +0000 UTC m=+0.035010540 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:08:42 compute-0 nova_compute[253512]: 2025-11-25 10:08:42.468 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:42 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:42.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:42.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:43 compute-0 ceph-mon[74207]: pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:44 compute-0 sudo[285596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:08:44 compute-0 sudo[285596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:44 compute-0 sudo[285596]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:44 compute-0 nova_compute[253512]: 2025-11-25 10:08:44.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:44 compute-0 nova_compute[253512]: 2025-11-25 10:08:44.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:08:44 compute-0 nova_compute[253512]: 2025-11-25 10:08:44.485 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:08:44 compute-0 nova_compute[253512]: 2025-11-25 10:08:44.485 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:08:44 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:44.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:44 compute-0 nova_compute[253512]: 2025-11-25 10:08:44.936 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:08:44
Nov 25 10:08:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:08:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:08:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.data', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.control']
Nov 25 10:08:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 10:08:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:08:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:08:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:08:45 compute-0 ceph-mon[74207]: pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:08:46 compute-0 nova_compute[253512]: 2025-11-25 10:08:46.308 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:46 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:46.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:47.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:47.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:47.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:47.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:47 compute-0 ceph-mon[74207]: pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:47 compute-0 podman[285624]: 2025-11-25 10:08:47.98978584 +0000 UTC m=+0.053439786 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 10:08:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:48 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:48.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:48.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:48.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:48.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:48.891Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:48.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:49 compute-0 ceph-mon[74207]: pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:49 compute-0 nova_compute[253512]: 2025-11-25 10:08:49.938 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:50] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:08:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:08:50] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:08:50 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:50.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:51 compute-0 nova_compute[253512]: 2025-11-25 10:08:51.311 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:51 compute-0 ceph-mon[74207]: pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:08:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:52.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:08:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:53 compute-0 ceph-mon[74207]: pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.951209) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333951232, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2368, "num_deletes": 503, "total_data_size": 4258517, "memory_usage": 4317712, "flush_reason": "Manual Compaction"}
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333958595, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 4012324, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29853, "largest_seqno": 32220, "table_properties": {"data_size": 4002264, "index_size": 5914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 24263, "raw_average_key_size": 20, "raw_value_size": 3980130, "raw_average_value_size": 3300, "num_data_blocks": 254, "num_entries": 1206, "num_filter_entries": 1206, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764065125, "oldest_key_time": 1764065125, "file_creation_time": 1764065333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 7412 microseconds, and 5616 cpu microseconds.
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.958621) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 4012324 bytes OK
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.958634) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.960505) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.960515) EVENT_LOG_v1 {"time_micros": 1764065333960512, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.960525) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4247966, prev total WAL file size 4247966, number of live WAL files 2.
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.961190) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3918KB)], [68(14MB)]
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333961217, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 18796736, "oldest_snapshot_seqno": -1}
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6331 keys, 12739902 bytes, temperature: kUnknown
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333987778, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 12739902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12699250, "index_size": 23710, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 165354, "raw_average_key_size": 26, "raw_value_size": 12586572, "raw_average_value_size": 1988, "num_data_blocks": 937, "num_entries": 6331, "num_filter_entries": 6331, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063076, "oldest_key_time": 0, "file_creation_time": 1764065333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ea9635bc-b5c0-4bcc-b39b-aa36751871d5", "db_session_id": "YG22O1RMAUAP611HLIK5", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.987944) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 12739902 bytes
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.992377) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 706.4 rd, 478.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 14.1 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(7.9) write-amplify(3.2) OK, records in: 7349, records dropped: 1018 output_compression: NoCompression
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.992390) EVENT_LOG_v1 {"time_micros": 1764065333992383, "job": 38, "event": "compaction_finished", "compaction_time_micros": 26610, "compaction_time_cpu_micros": 20436, "output_level": 6, "num_output_files": 1, "total_output_size": 12739902, "num_input_records": 7349, "num_output_records": 6331, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333992922, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333994882, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.961137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.994997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:08:53 compute-0 ceph-mon[74207]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
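The rocksdb burst above is the monitor's store compaction in two steps: JOB 37 flushes a ~4 MB memtable to the new L0 table #70, then JOB 38 manually compacts that file together with the existing ~14 MB L6 table #68 into the single 12.7 MB L6 table #71 and deletes both inputs. The amplification figures in the JOB 38 summary follow from the logged byte counts; checking the arithmetic with the values from the EVENT_LOG_v1 entries above:

# Byte counts from the flush/compaction events for jobs 37 and 38.
l0_in = 4_012_324          # table #70, the freshly flushed L0 file
total_in = 18_796_736      # compaction "input_data_size"
l6_in = total_in - l0_in   # table #68, the existing L6 file (~14.1 MB)
out = 12_739_902           # table #71, the compaction output

write_amplify = out / l0_in                    # ~3.2, as logged
read_write_amplify = (total_in + out) / l0_in  # ~7.9, as logged
print(round(write_amplify, 1), round(read_write_amplify, 1))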
Nov 25 10:08:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:54 compute-0 nova_compute[253512]: 2025-11-25 10:08:54.941 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:54.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:08:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
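Each pg_autoscaler line above reports a pool's share of the 64411926528-byte (60 GiB) raw capacity and turns it into a PG target as used_ratio * bias * PG budget. The logged targets are all consistent with a budget of 300, which would match the default mon_target_pg_per_osd=100 across the three OSDs this cluster appears to have (the budget is inferred from the numbers, not stated in the log). Verifying against three of the lines above:

# Reproduce pg_autoscaler's targets from the logged usage ratios.
BUDGET = 300  # inferred: 3 OSDs x mon_target_pg_per_osd (default 100)

pools = [
    # (pool, used_ratio, bias, pg target as logged)
    ('.mgr',               7.185749983720779e-06, 1.0, 0.0021557249951162337),
    ('images',             0.000665858301588852,  1.0, 0.19975749047665559),
    ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0, 0.0006104707950771635),
]
for name, ratio, bias, logged in pools:
    target = ratio * bias * BUDGET
    assert abs(target - logged) < 1e-12, name
    print(f'{name}: pg target {target:.12g}')

The tiny targets are then quantized up to each pool's current pg_num (1, 16, or 32), which is why no pool is resized.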
Nov 25 10:08:55 compute-0 ceph-mon[74207]: pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:08:55 compute-0 podman[285656]: 2025-11-25 10:08:55.98264938 +0000 UTC m=+0.042803638 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 10:08:56 compute-0 nova_compute[253512]: 2025-11-25 10:08:56.313 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:56 compute-0 sudo[285674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:08:56 compute-0 sudo[285674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:56 compute-0 sudo[285674]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:56 compute-0 sudo[285699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:08:56 compute-0 sudo[285699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:08:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:56.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:08:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:56.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:56 compute-0 sudo[285699]: pam_unix(sudo:session): session closed for user root
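The sudo sequence above is cephadm's standard host-inventory pass: the orchestrator connects as ceph-admin, confirms python3 with /bin/which, then runs the copied cephadm binary with gather-facts, which prints host facts (hostname, kernel, CPUs, memory, and so on) as JSON on stdout. A sketch of invoking it by hand for troubleshooting, with the binary path copied from the COMMAND line above (the printed keys are illustrative):

import json
import subprocess

CEPHADM = ('/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/'
           'cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36')

out = subprocess.run(['sudo', 'python3', CEPHADM, 'gather-facts'],
                     check=True, capture_output=True, text=True).stdout
facts = json.loads(out)
print(facts.get('hostname'), facts.get('kernel'))  # key names illustrative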
Nov 25 10:08:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:57.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:57.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:57.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:57.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 25 10:08:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:57 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 25 10:08:57 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:57 compute-0 ceph-mon[74207]: pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:08:57 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:57 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:08:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:08:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:08:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:08:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:08:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 530 B/s rd, 0 op/s
Nov 25 10:08:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:08:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:08:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:08:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:08:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:08:58 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:08:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:08:58 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:08:58 compute-0 sudo[285755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:08:58 compute-0 sudo[285755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:58 compute-0 sudo[285755]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:58 compute-0 sudo[285780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:08:58 compute-0 sudo[285780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:58 compute-0 podman[285836]: 2025-11-25 10:08:58.789477816 +0000 UTC m=+0.029052570 container create 597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:08:58 compute-0 systemd[1]: Started libpod-conmon-597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5.scope.
Nov 25 10:08:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:08:58 compute-0 podman[285836]: 2025-11-25 10:08:58.851056891 +0000 UTC m=+0.090631665 container init 597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_buck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:08:58 compute-0 podman[285836]: 2025-11-25 10:08:58.85553489 +0000 UTC m=+0.095109655 container start 597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:08:58 compute-0 podman[285836]: 2025-11-25 10:08:58.856719113 +0000 UTC m=+0.096293897 container attach 597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:08:58 compute-0 sweet_buck[285849]: 167 167
Nov 25 10:08:58 compute-0 systemd[1]: libpod-597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5.scope: Deactivated successfully.
Nov 25 10:08:58 compute-0 conmon[285849]: conmon 597cda24015c593d21c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5.scope/container/memory.events
Nov 25 10:08:58 compute-0 podman[285836]: 2025-11-25 10:08:58.86116886 +0000 UTC m=+0.100743623 container died 597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 10:08:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:08:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:58.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:08:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-be758bd0a74e55a7da67144d8ab085ce813edb02a8886b5abf85d0047b2f70e6-merged.mount: Deactivated successfully.
Nov 25 10:08:58 compute-0 podman[285836]: 2025-11-25 10:08:58.777880407 +0000 UTC m=+0.017455191 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:08:58 compute-0 podman[285836]: 2025-11-25 10:08:58.880616314 +0000 UTC m=+0.120191078 container remove 597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_buck, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 25 10:08:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:58.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:58.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:58.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:08:58.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:08:58 compute-0 systemd[1]: libpod-conmon-597cda24015c593d21c721a50000366e5dca14d573c799bf2452df36a52c06d5.scope: Deactivated successfully.
Nov 25 10:08:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:08:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:08:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:58.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:08:58 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 25 10:08:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:08:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:08:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:08:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:08:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:08:58 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:08:59 compute-0 podman[285871]: 2025-11-25 10:08:59.006603978 +0000 UTC m=+0.029034735 container create 194201a8e3ade63deb76686645c5e922c51d980e187a43a8563f9640f480ef6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wing, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:08:59 compute-0 systemd[1]: Started libpod-conmon-194201a8e3ade63deb76686645c5e922c51d980e187a43a8563f9640f480ef6a.scope.
Nov 25 10:08:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d39d676a1dbdbbcf38057ccd463458824a3e28c3f0c34bd1b93b08a0f875b8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d39d676a1dbdbbcf38057ccd463458824a3e28c3f0c34bd1b93b08a0f875b8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d39d676a1dbdbbcf38057ccd463458824a3e28c3f0c34bd1b93b08a0f875b8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d39d676a1dbdbbcf38057ccd463458824a3e28c3f0c34bd1b93b08a0f875b8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d39d676a1dbdbbcf38057ccd463458824a3e28c3f0c34bd1b93b08a0f875b8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:08:59 compute-0 podman[285871]: 2025-11-25 10:08:59.066236635 +0000 UTC m=+0.088667403 container init 194201a8e3ade63deb76686645c5e922c51d980e187a43a8563f9640f480ef6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wing, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:08:59 compute-0 podman[285871]: 2025-11-25 10:08:59.072433324 +0000 UTC m=+0.094864082 container start 194201a8e3ade63deb76686645c5e922c51d980e187a43a8563f9640f480ef6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:08:59 compute-0 podman[285871]: 2025-11-25 10:08:59.073548947 +0000 UTC m=+0.095979705 container attach 194201a8e3ade63deb76686645c5e922c51d980e187a43a8563f9640f480ef6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:08:59 compute-0 podman[285871]: 2025-11-25 10:08:58.995689997 +0000 UTC m=+0.018120775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:08:59 compute-0 quizzical_wing[285885]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:08:59 compute-0 quizzical_wing[285885]: --> All data devices are unavailable
Nov 25 10:08:59 compute-0 systemd[1]: libpod-194201a8e3ade63deb76686645c5e922c51d980e187a43a8563f9640f480ef6a.scope: Deactivated successfully.
Nov 25 10:08:59 compute-0 podman[285900]: 2025-11-25 10:08:59.367107119 +0000 UTC m=+0.018178234 container died 194201a8e3ade63deb76686645c5e922c51d980e187a43a8563f9640f480ef6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wing, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 10:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d39d676a1dbdbbcf38057ccd463458824a3e28c3f0c34bd1b93b08a0f875b8a-merged.mount: Deactivated successfully.
Nov 25 10:08:59 compute-0 podman[285900]: 2025-11-25 10:08:59.389031933 +0000 UTC m=+0.040103028 container remove 194201a8e3ade63deb76686645c5e922c51d980e187a43a8563f9640f480ef6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:08:59 compute-0 systemd[1]: libpod-conmon-194201a8e3ade63deb76686645c5e922c51d980e187a43a8563f9640f480ef6a.scope: Deactivated successfully.
Nov 25 10:08:59 compute-0 sudo[285780]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:59 compute-0 sudo[285912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:08:59 compute-0 sudo[285912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:59 compute-0 sudo[285912]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:59 compute-0 sudo[285937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:08:59 compute-0 sudo[285937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:08:59 compute-0 podman[285995]: 2025-11-25 10:08:59.817834957 +0000 UTC m=+0.027225215 container create d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:08:59 compute-0 systemd[1]: Started libpod-conmon-d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23.scope.
Nov 25 10:08:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:08:59 compute-0 podman[285995]: 2025-11-25 10:08:59.865416569 +0000 UTC m=+0.074806838 container init d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:08:59 compute-0 podman[285995]: 2025-11-25 10:08:59.869576279 +0000 UTC m=+0.078966537 container start d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:08:59 compute-0 podman[285995]: 2025-11-25 10:08:59.870851682 +0000 UTC m=+0.080241952 container attach d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:08:59 compute-0 agitated_solomon[286008]: 167 167
Nov 25 10:08:59 compute-0 systemd[1]: libpod-d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23.scope: Deactivated successfully.
Nov 25 10:08:59 compute-0 conmon[286008]: conmon d855ef304cc697b9e046 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23.scope/container/memory.events
Nov 25 10:08:59 compute-0 podman[285995]: 2025-11-25 10:08:59.87433601 +0000 UTC m=+0.083726268 container died d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f7ec012b2266d6b2d0f6ba70824233406ced79f52d110bab5e6f636f6830a7c-merged.mount: Deactivated successfully.
Nov 25 10:08:59 compute-0 podman[285995]: 2025-11-25 10:08:59.897793774 +0000 UTC m=+0.107184033 container remove d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 10:08:59 compute-0 podman[285995]: 2025-11-25 10:08:59.806983443 +0000 UTC m=+0.016373722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:08:59 compute-0 systemd[1]: libpod-conmon-d855ef304cc697b9e046376ce226eef865e7ed0731fb9cdacbda8b1bfe5abc23.scope: Deactivated successfully.
Nov 25 10:08:59 compute-0 nova_compute[253512]: 2025-11-25 10:08:59.942 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:08:59 compute-0 ceph-mon[74207]: pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 530 B/s rd, 0 op/s
Nov 25 10:08:59 compute-0 ceph-mon[74207]: Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 25 10:08:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:08:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:00 compute-0 podman[286031]: 2025-11-25 10:09:00.017423473 +0000 UTC m=+0.027642050 container create 0e8ff2dbe6371434cd6c10f1e32944625234a355978bac4545ebaf79a6583514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_poitras, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 10:09:00 compute-0 systemd[1]: Started libpod-conmon-0e8ff2dbe6371434cd6c10f1e32944625234a355978bac4545ebaf79a6583514.scope.
Nov 25 10:09:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871d66c2ba10d783424b25cbbba53b0b4533ed930139fb7e37bab36c9b897bb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871d66c2ba10d783424b25cbbba53b0b4533ed930139fb7e37bab36c9b897bb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871d66c2ba10d783424b25cbbba53b0b4533ed930139fb7e37bab36c9b897bb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871d66c2ba10d783424b25cbbba53b0b4533ed930139fb7e37bab36c9b897bb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:09:00 compute-0 podman[286031]: 2025-11-25 10:09:00.077373097 +0000 UTC m=+0.087591694 container init 0e8ff2dbe6371434cd6c10f1e32944625234a355978bac4545ebaf79a6583514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_poitras, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:09:00 compute-0 podman[286031]: 2025-11-25 10:09:00.083401259 +0000 UTC m=+0.093619835 container start 0e8ff2dbe6371434cd6c10f1e32944625234a355978bac4545ebaf79a6583514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_poitras, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 10:09:00 compute-0 podman[286031]: 2025-11-25 10:09:00.084553501 +0000 UTC m=+0.094772078 container attach 0e8ff2dbe6371434cd6c10f1e32944625234a355978bac4545ebaf79a6583514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:09:00 compute-0 podman[286031]: 2025-11-25 10:09:00.006641401 +0000 UTC m=+0.016859998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:09:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:00] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:09:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:00] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:09:00 compute-0 cranky_poitras[286044]: {
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:     "1": [
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:         {
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "devices": [
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "/dev/loop3"
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             ],
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "lv_name": "ceph_lv0",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "lv_size": "21470642176",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "name": "ceph_lv0",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "tags": {
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.cluster_name": "ceph",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.crush_device_class": "",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.encrypted": "0",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.osd_id": "1",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.type": "block",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.vdo": "0",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:                 "ceph.with_tpm": "0"
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             },
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "type": "block",
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:             "vg_name": "ceph_vg0"
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:         }
Nov 25 10:09:00 compute-0 cranky_poitras[286044]:     ]
Nov 25 10:09:00 compute-0 cranky_poitras[286044]: }
Nov 25 10:09:00 compute-0 systemd[1]: libpod-0e8ff2dbe6371434cd6c10f1e32944625234a355978bac4545ebaf79a6583514.scope: Deactivated successfully.
Nov 25 10:09:00 compute-0 podman[286031]: 2025-11-25 10:09:00.325467548 +0000 UTC m=+0.335686125 container died 0e8ff2dbe6371434cd6c10f1e32944625234a355978bac4545ebaf79a6583514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-871d66c2ba10d783424b25cbbba53b0b4533ed930139fb7e37bab36c9b897bb4-merged.mount: Deactivated successfully.
Nov 25 10:09:00 compute-0 podman[286031]: 2025-11-25 10:09:00.347171697 +0000 UTC m=+0.357390274 container remove 0e8ff2dbe6371434cd6c10f1e32944625234a355978bac4545ebaf79a6583514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:09:00 compute-0 systemd[1]: libpod-conmon-0e8ff2dbe6371434cd6c10f1e32944625234a355978bac4545ebaf79a6583514.scope: Deactivated successfully.
Nov 25 10:09:00 compute-0 sudo[285937]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 530 B/s rd, 0 op/s
Nov 25 10:09:00 compute-0 sudo[286062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:09:00 compute-0 sudo[286062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:00 compute-0 sudo[286062]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:00 compute-0 sudo[286087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:09:00 compute-0 sudo[286087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:00 compute-0 nova_compute[253512]: 2025-11-25 10:09:00.678 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:00 compute-0 podman[286142]: 2025-11-25 10:09:00.767140068 +0000 UTC m=+0.028582143 container create 630f5e81fe573d291f5465b2b83f55b6b225235c4b727c17d4e917dce87514eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:09:00 compute-0 systemd[1]: Started libpod-conmon-630f5e81fe573d291f5465b2b83f55b6b225235c4b727c17d4e917dce87514eb.scope.
Nov 25 10:09:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:09:00 compute-0 podman[286142]: 2025-11-25 10:09:00.818961963 +0000 UTC m=+0.080404048 container init 630f5e81fe573d291f5465b2b83f55b6b225235c4b727c17d4e917dce87514eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bartik, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 25 10:09:00 compute-0 podman[286142]: 2025-11-25 10:09:00.824136897 +0000 UTC m=+0.085578972 container start 630f5e81fe573d291f5465b2b83f55b6b225235c4b727c17d4e917dce87514eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 25 10:09:00 compute-0 podman[286142]: 2025-11-25 10:09:00.825218716 +0000 UTC m=+0.086660812 container attach 630f5e81fe573d291f5465b2b83f55b6b225235c4b727c17d4e917dce87514eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bartik, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 10:09:00 compute-0 lucid_bartik[286155]: 167 167
Nov 25 10:09:00 compute-0 systemd[1]: libpod-630f5e81fe573d291f5465b2b83f55b6b225235c4b727c17d4e917dce87514eb.scope: Deactivated successfully.
Nov 25 10:09:00 compute-0 podman[286142]: 2025-11-25 10:09:00.754967435 +0000 UTC m=+0.016409510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:09:00 compute-0 podman[286160]: 2025-11-25 10:09:00.858740999 +0000 UTC m=+0.016917227 container died 630f5e81fe573d291f5465b2b83f55b6b225235c4b727c17d4e917dce87514eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bartik, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 10:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-51161acd37827245d9729d55cbf9203b777c92bda7d631ad87cdb26e4d9bfa10-merged.mount: Deactivated successfully.
Nov 25 10:09:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:00.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:00 compute-0 podman[286160]: 2025-11-25 10:09:00.876199656 +0000 UTC m=+0.034375863 container remove 630f5e81fe573d291f5465b2b83f55b6b225235c4b727c17d4e917dce87514eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 10:09:00 compute-0 systemd[1]: libpod-conmon-630f5e81fe573d291f5465b2b83f55b6b225235c4b727c17d4e917dce87514eb.scope: Deactivated successfully.
Nov 25 10:09:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:00.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:01 compute-0 podman[286179]: 2025-11-25 10:09:01.000993479 +0000 UTC m=+0.029796422 container create 0559deadfa7a1ee86db52c84b23c5691443cc0847714f551fd5d083e7d0abc28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Nov 25 10:09:01 compute-0 systemd[1]: Started libpod-conmon-0559deadfa7a1ee86db52c84b23c5691443cc0847714f551fd5d083e7d0abc28.scope.
Nov 25 10:09:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1442a56c8a641f2ae6200e59e248d51f5f0a25260601b88630fd40038a3d54f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1442a56c8a641f2ae6200e59e248d51f5f0a25260601b88630fd40038a3d54f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1442a56c8a641f2ae6200e59e248d51f5f0a25260601b88630fd40038a3d54f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1442a56c8a641f2ae6200e59e248d51f5f0a25260601b88630fd40038a3d54f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:09:01 compute-0 podman[286179]: 2025-11-25 10:09:01.062971225 +0000 UTC m=+0.091774188 container init 0559deadfa7a1ee86db52c84b23c5691443cc0847714f551fd5d083e7d0abc28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:09:01 compute-0 podman[286179]: 2025-11-25 10:09:01.069327574 +0000 UTC m=+0.098130508 container start 0559deadfa7a1ee86db52c84b23c5691443cc0847714f551fd5d083e7d0abc28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 10:09:01 compute-0 podman[286179]: 2025-11-25 10:09:01.070461803 +0000 UTC m=+0.099264745 container attach 0559deadfa7a1ee86db52c84b23c5691443cc0847714f551fd5d083e7d0abc28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 25 10:09:01 compute-0 podman[286179]: 2025-11-25 10:09:00.989470659 +0000 UTC m=+0.018273623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:09:01 compute-0 nova_compute[253512]: 2025-11-25 10:09:01.314 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:01 compute-0 funny_maxwell[286192]: {}
Nov 25 10:09:01 compute-0 lvm[286269]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:09:01 compute-0 lvm[286269]: VG ceph_vg0 finished
Nov 25 10:09:01 compute-0 systemd[1]: libpod-0559deadfa7a1ee86db52c84b23c5691443cc0847714f551fd5d083e7d0abc28.scope: Deactivated successfully.
Nov 25 10:09:01 compute-0 podman[286179]: 2025-11-25 10:09:01.556082187 +0000 UTC m=+0.584885129 container died 0559deadfa7a1ee86db52c84b23c5691443cc0847714f551fd5d083e7d0abc28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:09:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1442a56c8a641f2ae6200e59e248d51f5f0a25260601b88630fd40038a3d54f6-merged.mount: Deactivated successfully.
Nov 25 10:09:01 compute-0 podman[286179]: 2025-11-25 10:09:01.578075079 +0000 UTC m=+0.606878021 container remove 0559deadfa7a1ee86db52c84b23c5691443cc0847714f551fd5d083e7d0abc28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 10:09:01 compute-0 systemd[1]: libpod-conmon-0559deadfa7a1ee86db52c84b23c5691443cc0847714f551fd5d083e7d0abc28.scope: Deactivated successfully.
Nov 25 10:09:01 compute-0 sudo[286087]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:09:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:09:01 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:09:01 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:09:01 compute-0 sudo[286280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:09:01 compute-0 sudo[286280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:01 compute-0 sudo[286280]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:01 compute-0 ceph-mon[74207]: pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 530 B/s rd, 0 op/s
Nov 25 10:09:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:09:01 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:09:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 25 10:09:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:02.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:02.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:03 compute-0 ceph-mon[74207]: pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 25 10:09:04 compute-0 sudo[286309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:09:04 compute-0 sudo[286309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:04 compute-0 sudo[286309]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 796 B/s rd, 0 op/s
Nov 25 10:09:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:04.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:04 compute-0 nova_compute[253512]: 2025-11-25 10:09:04.944 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:04.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:09:05.394 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:09:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:09:05.395 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:09:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:09:05.395 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:09:06 compute-0 ceph-mon[74207]: pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 796 B/s rd, 0 op/s
Nov 25 10:09:06 compute-0 nova_compute[253512]: 2025-11-25 10:09:06.316 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 25 10:09:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:06.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:06.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:07.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:07.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:07.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:07.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:08 compute-0 ceph-mon[74207]: pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 25 10:09:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 796 B/s rd, 0 op/s
Nov 25 10:09:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:08.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:08.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:08.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:08.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:08.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:09 compute-0 nova_compute[253512]: 2025-11-25 10:09:09.946 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:10 compute-0 ceph-mon[74207]: pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 796 B/s rd, 0 op/s
Nov 25 10:09:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:10] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:09:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:10] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Nov 25 10:09:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:10.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:10.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:11 compute-0 ceph-mon[74207]: pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:11 compute-0 nova_compute[253512]: 2025-11-25 10:09:11.317 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:12.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:12.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:12 compute-0 podman[286342]: 2025-11-25 10:09:12.97338407 +0000 UTC m=+0.036640111 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:09:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:13 compute-0 ceph-mon[74207]: pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:14.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:14 compute-0 nova_compute[253512]: 2025-11-25 10:09:14.948 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:14.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:09:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:09:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:09:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:09:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:09:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:09:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:09:15 compute-0 ceph-mon[74207]: pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:16 compute-0 nova_compute[253512]: 2025-11-25 10:09:16.319 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:16 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:16.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:16.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:17.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:17.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:17.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:17.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:17 compute-0 ceph-mon[74207]: pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:18 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:18.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:18.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:18.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:18.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:18.891Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:18 compute-0 podman[286364]: 2025-11-25 10:09:18.99233341 +0000 UTC m=+0.054360791 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 25 10:09:19 compute-0 ceph-mon[74207]: pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:19 compute-0 nova_compute[253512]: 2025-11-25 10:09:19.949 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:20] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:09:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:20] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:09:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:20.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:21 compute-0 nova_compute[253512]: 2025-11-25 10:09:21.321 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:21 compute-0 ceph-mon[74207]: pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:22.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:22.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:23 compute-0 ceph-mon[74207]: pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:24 compute-0 sudo[286393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:09:24 compute-0 sudo[286393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:24 compute-0 sudo[286393]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:24.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:24 compute-0 nova_compute[253512]: 2025-11-25 10:09:24.950 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:24.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:25 compute-0 ceph-mon[74207]: pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:26 compute-0 nova_compute[253512]: 2025-11-25 10:09:26.324 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:26.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:26.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:26 compute-0 podman[286420]: 2025-11-25 10:09:26.99777238 +0000 UTC m=+0.062904264 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 10:09:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:27.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:27.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:27.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:27.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:27 compute-0 ceph-mon[74207]: pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:28 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:28.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:28.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:28.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:28.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:28.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:28.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:29 compute-0 ceph-mon[74207]: pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2359044906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:29 compute-0 nova_compute[253512]: 2025-11-25 10:09:29.951 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:09:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:30] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:09:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:30] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:09:30 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:30 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/906682007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:30.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:30.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.325 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.491 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.491 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.492 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.492 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.492 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:09:31 compute-0 ceph-mon[74207]: pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:31 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:09:31 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117030415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:31 compute-0 nova_compute[253512]: 2025-11-25 10:09:31.830 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.029 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.031 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4558MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.031 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.032 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.075 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.075 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.088 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:09:32 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:32 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:09:32 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2986840769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.431 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.435 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.456 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.457 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:09:32 compute-0 nova_compute[253512]: 2025-11-25 10:09:32.458 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
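[editor's note] The resource-tracker pass above reports this inventory to Placement; the capacity Placement will actually schedule against follows from (total - reserved) * allocation_ratio per resource class. A minimal sketch reproducing that arithmetic, with the values copied from the inventory dict logged at 10:09:32.456:

# Capacity Placement derives from the inventory logged above.
# Formula: capacity = (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7681, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {capacity:g} schedulable units")
# -> VCPU: 16, MEMORY_MB: 7169, DISK_GB: 52.2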
Nov 25 10:09:32 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3117030415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:32 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2986840769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:32.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:32.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
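[editor's note] The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100/.102 that repeat every two seconds below have the shape of load-balancer health probes against the RGW beast frontend. A sketch of such a probe; the RGW host and port are assumptions, since the log records only the probing clients:

# Hypothetical health probe mirroring the anonymous "HEAD /" requests above.
# Host and port are assumptions; the log only shows the client addresses.
import http.client

conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)   # RGW answers 200 with an empty body, as in the log
conn.close()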
Nov 25 10:09:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:33 compute-0 nova_compute[253512]: 2025-11-25 10:09:33.453 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:33 compute-0 nova_compute[253512]: 2025-11-25 10:09:33.454 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:33 compute-0 ceph-mon[74207]: pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:09:34 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:34 compute-0 nova_compute[253512]: 2025-11-25 10:09:34.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:34.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:34 compute-0 nova_compute[253512]: 2025-11-25 10:09:34.953 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:34.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:35 compute-0 ceph-mon[74207]: pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:35 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1967994406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:36 compute-0 nova_compute[253512]: 2025-11-25 10:09:36.328 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:36 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:36 compute-0 nova_compute[253512]: 2025-11-25 10:09:36.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:36 compute-0 nova_compute[253512]: 2025-11-25 10:09:36.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:09:36 compute-0 nova_compute[253512]: 2025-11-25 10:09:36.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:09:36 compute-0 nova_compute[253512]: 2025-11-25 10:09:36.482 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:09:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2342457185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:09:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:36.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:36.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:37.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:37.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:37.127Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:37.127Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
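[editor's note] The alertmanager failures above are pure DNS: the ceph-dashboard webhook targets np0005534694/5/6.shiftstack do not resolve on the configured resolver at 192.168.122.80, so every notify attempt dies in the lookup. A quick reproduction of the failing step via the system resolver (querying 192.168.122.80 directly would need an extra DNS library, which is left out here):

# Reproduce the failing DNS step from the alertmanager errors above.
import socket

for host in ("np0005534694.shiftstack",
             "np0005534695.shiftstack",
             "np0005534696.shiftstack"):
    try:
        addr = socket.getaddrinfo(host, 8443)
        print(host, "->", addr[0][4][0])
    except socket.gaierror as exc:
        print(host, "-> lookup failed:", exc)   # matches "no such host"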
Nov 25 10:09:37 compute-0 nova_compute[253512]: 2025-11-25 10:09:37.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:37 compute-0 nova_compute[253512]: 2025-11-25 10:09:37.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:09:37 compute-0 ceph-mon[74207]: pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:38 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:38.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:38.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:38.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:38.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:39 compute-0 ceph-mon[74207]: pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:39 compute-0 nova_compute[253512]: 2025-11-25 10:09:39.954 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:40] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Nov 25 10:09:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:40] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
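[editor's note] The GET /metrics above is Prometheus scraping the ceph-mgr prometheus module, which listens on port 9283 by default; the ~48.5 kB response size recurs on every scrape. A sketch fetching the same endpoint (the hostname is an assumption):

# Fetch the exporter endpoint Prometheus scrapes above. Port 9283 is the
# mgr prometheus module default; the host is an assumption.
from urllib.request import urlopen

with urlopen("http://compute-0:9283/metrics", timeout=5) as resp:
    body = resp.read().decode()
print(body.splitlines()[0])   # first metric line
print(len(body), "bytes")     # the log shows ~48.5 kB per scrape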
Nov 25 10:09:40 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:40.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:40.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:41 compute-0 nova_compute[253512]: 2025-11-25 10:09:41.329 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:41 compute-0 nova_compute[253512]: 2025-11-25 10:09:41.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:09:41 compute-0 ceph-mon[74207]: pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:42 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:42.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:42.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:43 compute-0 ceph-mon[74207]: pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:43 compute-0 podman[286498]: 2025-11-25 10:09:43.975607084 +0000 UTC m=+0.039760252 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:09:44 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:44 compute-0 sudo[286515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:09:44 compute-0 sudo[286515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:09:44 compute-0 sudo[286515]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:44.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:44 compute-0 nova_compute[253512]: 2025-11-25 10:09:44.955 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:09:44
Nov 25 10:09:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:09:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:09:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'images', 'backups', '.nfs', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 10:09:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
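[editor's note] "prepared 0/10 upmap changes" means the balancer found nothing to move this round; the denominator is its per-round optimization cap (to my knowledge the module option upmap_max_optimizations, default 10), and with all 337 PGs active+clean on a tiny cluster zero changes is expected. A sketch querying the balancer state; the exact JSON shape of the reply is treated as an assumption:

# Inspect the balancer that produced the "Optimize plan ... prepared 0/10
# upmap changes" lines above.
import json, subprocess

out = subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
print(json.dumps(json.loads(out), indent=2))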
Nov 25 10:09:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:09:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:44.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:09:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:09:45 compute-0 ceph-mon[74207]: pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:09:46 compute-0 nova_compute[253512]: 2025-11-25 10:09:46.331 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:46 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:46.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:46.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:47.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:47.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:47.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:47.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:47 compute-0 ceph-mon[74207]: pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:48 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:48.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:48.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:48.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:48.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:48.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:49 compute-0 ceph-mon[74207]: pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:49 compute-0 nova_compute[253512]: 2025-11-25 10:09:49.956 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:49 compute-0 podman[286545]: 2025-11-25 10:09:49.989676427 +0000 UTC m=+0.053688003 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:09:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:50] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:09:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:09:50] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:09:50 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:50.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:51.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:51 compute-0 nova_compute[253512]: 2025-11-25 10:09:51.334 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:51 compute-0 ceph-mon[74207]: pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:52 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:52 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:52 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:52.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:53.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:53 compute-0 ceph-mon[74207]: pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1241217115' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:09:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1241217115' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
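[editor's note] The two audit lines above show client.openstack at 192.168.122.10 (plausibly the Cinder host, an assumption) issuing a df and a pool-quota query. The JSON mon command {"prefix":"osd pool get-quota","pool":"volumes"} maps directly to the CLI; a replay using the same --id/--conf client options nova_compute used for its ceph df earlier:

# Replay the quota query logged above from the shell.
import json, subprocess

out = subprocess.run(
    ["ceph", "osd", "pool", "get-quota", "volumes",
     "--format", "json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
print(json.dumps(json.loads(out), indent=2))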
Nov 25 10:09:54 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:54 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:54 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:54.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:54 compute-0 nova_compute[253512]: 2025-11-25 10:09:54.957 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:55.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:09:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
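[editor's note] Each "pg target" in the autoscaler pass above is exactly the pool's share of raw space times its bias times 300, and 300 is consistent with the default mon_target_pg_per_osd=100 on a 3-OSD cluster; the OSD count is an assumption, as the log never states it. The targets are then quantized up to each pool's floor (1, 16, or 32 here), which is why nothing changes. A short check against the logged values:

# Reproduce the pg_autoscaler targets logged above.
TARGET_PGS = 300  # assumed: 100 target PGs/OSD * 3 OSDs (OSD count not in log)

pools = [  # (pool, usage_ratio, bias, logged pg target)
    (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
    ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
]
for name, ratio, bias, logged in pools:
    target = ratio * bias * TARGET_PGS
    # agrees with the logged value to float precision
    print(f"{name}: computed {target:.12g} vs logged {logged:.12g}")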
Nov 25 10:09:55 compute-0 ceph-mon[74207]: pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:56 compute-0 nova_compute[253512]: 2025-11-25 10:09:56.335 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:56 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:56 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:09:56 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:56.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:09:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:57.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:57.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:57.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:57.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:57 compute-0 ceph-mon[74207]: pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:09:58 compute-0 podman[286577]: 2025-11-25 10:09:58.004596467 +0000 UTC m=+0.065744176 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:09:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:09:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:58.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:58.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:58.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:09:58.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:09:58 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:58 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:58 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:58.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:09:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:09:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:59.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:09:59 compute-0 ceph-mon[74207]: pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:09:59 compute-0 nova_compute[253512]: 2025-11-25 10:09:59.957 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:09:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:09:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:00 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Nov 25 10:10:00 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Nov 25 10:10:00 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.rychik on compute-0 is in error state
Nov 25 10:10:00 compute-0 ceph-mon[74207]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.yfzsxe on compute-1 is in error state
Nov 25 10:10:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:00] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:10:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:00] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:10:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:00 compute-0 ceph-mon[74207]: Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Nov 25 10:10:00 compute-0 ceph-mon[74207]: [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Nov 25 10:10:00 compute-0 ceph-mon[74207]:     daemon nfs.cephfs.2.0.compute-0.rychik on compute-0 is in error state
Nov 25 10:10:00 compute-0 ceph-mon[74207]:     daemon nfs.cephfs.0.0.compute-1.yfzsxe on compute-1 is in error state
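[editor's note] The HEALTH_WARN above names two cephadm-managed NFS daemons in error state, one on each compute node. A reasonable first triage step is listing what the orchestrator knows about them; `ceph orch ps` with --daemon-type and JSON output are standard cephadm orchestrator commands, though the exact JSON field names here are assumptions:

# Triage for the CEPHADM_FAILED_DAEMON warning above.
import json, subprocess

out = subprocess.run(
    ["ceph", "orch", "ps", "--daemon-type", "nfs", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for d in json.loads(out):
    print(d.get("daemon_id"), d.get("hostname"), d.get("status_desc"))
# A targeted follow-up would be, e.g.:
#   ceph orch daemon restart nfs.cephfs.2.0.compute-0.rychik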
Nov 25 10:10:00 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:00 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.003000029s ======
Nov 25 10:10:00 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:00.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000029s
Nov 25 10:10:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:01.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:01 compute-0 nova_compute[253512]: 2025-11-25 10:10:01.337 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:01 compute-0 ceph-mon[74207]: pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:01 compute-0 sudo[286598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:10:01 compute-0 sudo[286598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:01 compute-0 sudo[286598]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:01 compute-0 sudo[286623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:10:01 compute-0 sudo[286623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:02 compute-0 sudo[286623]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:10:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:10:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:10:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:10:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:10:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Nov 25 10:10:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:10:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:10:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:10:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:10:02 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:10:02 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:10:02 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:10:02 compute-0 sudo[286678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:10:02 compute-0 sudo[286678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:02 compute-0 sudo[286678]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:02 compute-0 sudo[286703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:10:02 compute-0 sudo[286703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:10:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:10:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:10:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:10:02 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:10:02 compute-0 podman[286759]: 2025-11-25 10:10:02.782611479 +0000 UTC m=+0.031526814 container create df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_hopper, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 10:10:02 compute-0 systemd[1]: Started libpod-conmon-df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65.scope.
Nov 25 10:10:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:10:02 compute-0 podman[286759]: 2025-11-25 10:10:02.837785983 +0000 UTC m=+0.086701337 container init df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 10:10:02 compute-0 podman[286759]: 2025-11-25 10:10:02.843474906 +0000 UTC m=+0.092390239 container start df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_hopper, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:10:02 compute-0 podman[286759]: 2025-11-25 10:10:02.844655781 +0000 UTC m=+0.093571115 container attach df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_hopper, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 25 10:10:02 compute-0 fervent_hopper[286772]: 167 167
Nov 25 10:10:02 compute-0 systemd[1]: libpod-df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65.scope: Deactivated successfully.
Nov 25 10:10:02 compute-0 conmon[286772]: conmon df082a3677a3b829320f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65.scope/container/memory.events
Nov 25 10:10:02 compute-0 podman[286759]: 2025-11-25 10:10:02.849167454 +0000 UTC m=+0.098082788 container died df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8792a6f65c2a9d883a651c8f7b82b8ce8507a83214e2c21372acd38de3dc4bb-merged.mount: Deactivated successfully.
Nov 25 10:10:02 compute-0 podman[286759]: 2025-11-25 10:10:02.770645004 +0000 UTC m=+0.019560348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:10:02 compute-0 podman[286759]: 2025-11-25 10:10:02.86872685 +0000 UTC m=+0.117642175 container remove df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_hopper, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 10:10:02 compute-0 systemd[1]: libpod-conmon-df082a3677a3b829320f3546d0ccd143725666fc88832c6ebcb1fdd583139e65.scope: Deactivated successfully.
Nov 25 10:10:02 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:02 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:10:02 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:02.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:10:02 compute-0 podman[286794]: 2025-11-25 10:10:02.990589177 +0000 UTC m=+0.029828852 container create db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mccarthy, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:10:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:03.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:03 compute-0 systemd[1]: Started libpod-conmon-db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593.scope.
Nov 25 10:10:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df76611ad13bcd005cdc5180730830e4b1fc0770f5b817692c78159d792103ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df76611ad13bcd005cdc5180730830e4b1fc0770f5b817692c78159d792103ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df76611ad13bcd005cdc5180730830e4b1fc0770f5b817692c78159d792103ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df76611ad13bcd005cdc5180730830e4b1fc0770f5b817692c78159d792103ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df76611ad13bcd005cdc5180730830e4b1fc0770f5b817692c78159d792103ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:03 compute-0 podman[286794]: 2025-11-25 10:10:03.05219291 +0000 UTC m=+0.091432574 container init db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:10:03 compute-0 podman[286794]: 2025-11-25 10:10:03.057643664 +0000 UTC m=+0.096883328 container start db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 10:10:03 compute-0 podman[286794]: 2025-11-25 10:10:03.058711025 +0000 UTC m=+0.097950689 container attach db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 25 10:10:03 compute-0 podman[286794]: 2025-11-25 10:10:02.979003472 +0000 UTC m=+0.018243146 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:10:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:03 compute-0 hardcore_mccarthy[286807]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:10:03 compute-0 hardcore_mccarthy[286807]: --> All data devices are unavailable
Nov 25 10:10:03 compute-0 systemd[1]: libpod-db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593.scope: Deactivated successfully.
Nov 25 10:10:03 compute-0 conmon[286807]: conmon db631e71b38edfd7cc55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593.scope/container/memory.events
Nov 25 10:10:03 compute-0 podman[286794]: 2025-11-25 10:10:03.325665544 +0000 UTC m=+0.364905208 container died db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mccarthy, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-df76611ad13bcd005cdc5180730830e4b1fc0770f5b817692c78159d792103ab-merged.mount: Deactivated successfully.
Nov 25 10:10:03 compute-0 podman[286794]: 2025-11-25 10:10:03.348048612 +0000 UTC m=+0.387288276 container remove db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mccarthy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:10:03 compute-0 systemd[1]: libpod-conmon-db631e71b38edfd7cc55b39e215704d18033e96967cabc2924f9a63b08b64593.scope: Deactivated successfully.
Nov 25 10:10:03 compute-0 sudo[286703]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:03 compute-0 sudo[286832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:10:03 compute-0 sudo[286832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:03 compute-0 sudo[286832]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:03 compute-0 sudo[286857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:10:03 compute-0 sudo[286857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:03 compute-0 ceph-mon[74207]: pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Nov 25 10:10:03 compute-0 podman[286915]: 2025-11-25 10:10:03.758292485 +0000 UTC m=+0.029644385 container create 0b461bdce91a79d628b98687ffe7eaade68f7db0a25b9a61f97ecfaa9e0112d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:10:03 compute-0 systemd[1]: Started libpod-conmon-0b461bdce91a79d628b98687ffe7eaade68f7db0a25b9a61f97ecfaa9e0112d4.scope.
Nov 25 10:10:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:10:03 compute-0 podman[286915]: 2025-11-25 10:10:03.812308836 +0000 UTC m=+0.083660726 container init 0b461bdce91a79d628b98687ffe7eaade68f7db0a25b9a61f97ecfaa9e0112d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ritchie, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:10:03 compute-0 podman[286915]: 2025-11-25 10:10:03.818380951 +0000 UTC m=+0.089732841 container start 0b461bdce91a79d628b98687ffe7eaade68f7db0a25b9a61f97ecfaa9e0112d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:10:03 compute-0 podman[286915]: 2025-11-25 10:10:03.819789886 +0000 UTC m=+0.091141776 container attach 0b461bdce91a79d628b98687ffe7eaade68f7db0a25b9a61f97ecfaa9e0112d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ritchie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:10:03 compute-0 interesting_ritchie[286929]: 167 167
Nov 25 10:10:03 compute-0 systemd[1]: libpod-0b461bdce91a79d628b98687ffe7eaade68f7db0a25b9a61f97ecfaa9e0112d4.scope: Deactivated successfully.
Nov 25 10:10:03 compute-0 podman[286915]: 2025-11-25 10:10:03.821877611 +0000 UTC m=+0.093229501 container died 0b461bdce91a79d628b98687ffe7eaade68f7db0a25b9a61f97ecfaa9e0112d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ritchie, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 10:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b45b1992694ce66b964de46fcc4b76121e5718f6d73f5d0d2166b2cc520ca39-merged.mount: Deactivated successfully.
Nov 25 10:10:03 compute-0 podman[286915]: 2025-11-25 10:10:03.839944564 +0000 UTC m=+0.111296444 container remove 0b461bdce91a79d628b98687ffe7eaade68f7db0a25b9a61f97ecfaa9e0112d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:10:03 compute-0 podman[286915]: 2025-11-25 10:10:03.747321486 +0000 UTC m=+0.018673396 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:10:03 compute-0 systemd[1]: libpod-conmon-0b461bdce91a79d628b98687ffe7eaade68f7db0a25b9a61f97ecfaa9e0112d4.scope: Deactivated successfully.
Nov 25 10:10:03 compute-0 podman[286950]: 2025-11-25 10:10:03.963250131 +0000 UTC m=+0.028878030 container create 78c1e530d85ffa61702de66bf9d437e493caecad1cb7a3667aabb5a03c601139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:10:03 compute-0 systemd[1]: Started libpod-conmon-78c1e530d85ffa61702de66bf9d437e493caecad1cb7a3667aabb5a03c601139.scope.
Nov 25 10:10:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c54078adaa86912a4b2ee53324aaf2c87ab4a253ba1b2387b074bf7377b4699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c54078adaa86912a4b2ee53324aaf2c87ab4a253ba1b2387b074bf7377b4699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c54078adaa86912a4b2ee53324aaf2c87ab4a253ba1b2387b074bf7377b4699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c54078adaa86912a4b2ee53324aaf2c87ab4a253ba1b2387b074bf7377b4699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:04 compute-0 podman[286950]: 2025-11-25 10:10:04.027106438 +0000 UTC m=+0.092734328 container init 78c1e530d85ffa61702de66bf9d437e493caecad1cb7a3667aabb5a03c601139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_cohen, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:10:04 compute-0 podman[286950]: 2025-11-25 10:10:04.032701484 +0000 UTC m=+0.098329373 container start 78c1e530d85ffa61702de66bf9d437e493caecad1cb7a3667aabb5a03c601139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:10:04 compute-0 podman[286950]: 2025-11-25 10:10:04.033803472 +0000 UTC m=+0.099431361 container attach 78c1e530d85ffa61702de66bf9d437e493caecad1cb7a3667aabb5a03c601139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_cohen, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 10:10:04 compute-0 podman[286950]: 2025-11-25 10:10:03.951687348 +0000 UTC m=+0.017315258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:10:04 compute-0 cool_cohen[286964]: {
Nov 25 10:10:04 compute-0 cool_cohen[286964]:     "1": [
Nov 25 10:10:04 compute-0 cool_cohen[286964]:         {
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "devices": [
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "/dev/loop3"
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             ],
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "lv_name": "ceph_lv0",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "lv_size": "21470642176",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "name": "ceph_lv0",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "tags": {
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.cluster_name": "ceph",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.crush_device_class": "",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.encrypted": "0",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.osd_id": "1",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.type": "block",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.vdo": "0",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:                 "ceph.with_tpm": "0"
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             },
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "type": "block",
Nov 25 10:10:04 compute-0 cool_cohen[286964]:             "vg_name": "ceph_vg0"
Nov 25 10:10:04 compute-0 cool_cohen[286964]:         }
Nov 25 10:10:04 compute-0 cool_cohen[286964]:     ]
Nov 25 10:10:04 compute-0 cool_cohen[286964]: }
Nov 25 10:10:04 compute-0 systemd[1]: libpod-78c1e530d85ffa61702de66bf9d437e493caecad1cb7a3667aabb5a03c601139.scope: Deactivated successfully.
Nov 25 10:10:04 compute-0 podman[286950]: 2025-11-25 10:10:04.275557893 +0000 UTC m=+0.341185783 container died 78c1e530d85ffa61702de66bf9d437e493caecad1cb7a3667aabb5a03c601139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_cohen, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c54078adaa86912a4b2ee53324aaf2c87ab4a253ba1b2387b074bf7377b4699-merged.mount: Deactivated successfully.
Nov 25 10:10:04 compute-0 podman[286950]: 2025-11-25 10:10:04.296096184 +0000 UTC m=+0.361724072 container remove 78c1e530d85ffa61702de66bf9d437e493caecad1cb7a3667aabb5a03c601139 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 25 10:10:04 compute-0 systemd[1]: libpod-conmon-78c1e530d85ffa61702de66bf9d437e493caecad1cb7a3667aabb5a03c601139.scope: Deactivated successfully.
Nov 25 10:10:04 compute-0 sudo[286857]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:04 compute-0 sudo[286983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:10:04 compute-0 sudo[286983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:04 compute-0 sudo[286983]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Nov 25 10:10:04 compute-0 sudo[287008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:10:04 compute-0 sudo[287008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:04 compute-0 sudo[287033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:10:04 compute-0 sudo[287033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:04 compute-0 sudo[287033]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:04 compute-0 podman[287088]: 2025-11-25 10:10:04.714593763 +0000 UTC m=+0.029357063 container create 7466a84a9756718d0502a3c32d1c00e76a7d974a886ff241c9710397db407af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elion, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 25 10:10:04 compute-0 systemd[1]: Started libpod-conmon-7466a84a9756718d0502a3c32d1c00e76a7d974a886ff241c9710397db407af0.scope.
Nov 25 10:10:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:10:04 compute-0 podman[287088]: 2025-11-25 10:10:04.763549716 +0000 UTC m=+0.078313036 container init 7466a84a9756718d0502a3c32d1c00e76a7d974a886ff241c9710397db407af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elion, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:10:04 compute-0 podman[287088]: 2025-11-25 10:10:04.768481922 +0000 UTC m=+0.083245222 container start 7466a84a9756718d0502a3c32d1c00e76a7d974a886ff241c9710397db407af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elion, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:10:04 compute-0 podman[287088]: 2025-11-25 10:10:04.769908912 +0000 UTC m=+0.084672212 container attach 7466a84a9756718d0502a3c32d1c00e76a7d974a886ff241c9710397db407af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 25 10:10:04 compute-0 mystifying_elion[287101]: 167 167
Nov 25 10:10:04 compute-0 systemd[1]: libpod-7466a84a9756718d0502a3c32d1c00e76a7d974a886ff241c9710397db407af0.scope: Deactivated successfully.
Nov 25 10:10:04 compute-0 podman[287088]: 2025-11-25 10:10:04.772616856 +0000 UTC m=+0.087380155 container died 7466a84a9756718d0502a3c32d1c00e76a7d974a886ff241c9710397db407af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1095d413651a2e81640f464d30d598e1d5424659e7856fae65e7ee32576919d-merged.mount: Deactivated successfully.
Nov 25 10:10:04 compute-0 podman[287088]: 2025-11-25 10:10:04.790831468 +0000 UTC m=+0.105594767 container remove 7466a84a9756718d0502a3c32d1c00e76a7d974a886ff241c9710397db407af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:10:04 compute-0 podman[287088]: 2025-11-25 10:10:04.702391524 +0000 UTC m=+0.017154845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:10:04 compute-0 systemd[1]: libpod-conmon-7466a84a9756718d0502a3c32d1c00e76a7d974a886ff241c9710397db407af0.scope: Deactivated successfully.
Nov 25 10:10:04 compute-0 podman[287123]: 2025-11-25 10:10:04.912476495 +0000 UTC m=+0.029752298 container create 5ba469a05315a384eb207d183dcd25b5a86ec562b8f6319a6a9f0d96eeb2ae59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_euclid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:10:04 compute-0 systemd[1]: Started libpod-conmon-5ba469a05315a384eb207d183dcd25b5a86ec562b8f6319a6a9f0d96eeb2ae59.scope.
Nov 25 10:10:04 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:04 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:04 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:04.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:10:04 compute-0 nova_compute[253512]: 2025-11-25 10:10:04.958 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81abcd3ee9fede87a97934f6f5dc3cf9dc6b50bfebbfebdf9506be4ac57a5ba5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81abcd3ee9fede87a97934f6f5dc3cf9dc6b50bfebbfebdf9506be4ac57a5ba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81abcd3ee9fede87a97934f6f5dc3cf9dc6b50bfebbfebdf9506be4ac57a5ba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81abcd3ee9fede87a97934f6f5dc3cf9dc6b50bfebbfebdf9506be4ac57a5ba5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:10:04 compute-0 podman[287123]: 2025-11-25 10:10:04.971268507 +0000 UTC m=+0.088544300 container init 5ba469a05315a384eb207d183dcd25b5a86ec562b8f6319a6a9f0d96eeb2ae59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 10:10:04 compute-0 podman[287123]: 2025-11-25 10:10:04.976507762 +0000 UTC m=+0.093783555 container start 5ba469a05315a384eb207d183dcd25b5a86ec562b8f6319a6a9f0d96eeb2ae59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:10:04 compute-0 podman[287123]: 2025-11-25 10:10:04.98064507 +0000 UTC m=+0.097920863 container attach 5ba469a05315a384eb207d183dcd25b5a86ec562b8f6319a6a9f0d96eeb2ae59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_euclid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:10:04 compute-0 podman[287123]: 2025-11-25 10:10:04.901063444 +0000 UTC m=+0.018339237 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:10:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:05.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:10:05.395 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:10:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:10:05.396 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:10:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:10:05.396 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:10:05 compute-0 sweet_euclid[287137]: {}
Nov 25 10:10:05 compute-0 lvm[287214]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:10:05 compute-0 lvm[287214]: VG ceph_vg0 finished
Nov 25 10:10:05 compute-0 systemd[1]: libpod-5ba469a05315a384eb207d183dcd25b5a86ec562b8f6319a6a9f0d96eeb2ae59.scope: Deactivated successfully.
Nov 25 10:10:05 compute-0 podman[287123]: 2025-11-25 10:10:05.47826999 +0000 UTC m=+0.595545783 container died 5ba469a05315a384eb207d183dcd25b5a86ec562b8f6319a6a9f0d96eeb2ae59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_euclid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-81abcd3ee9fede87a97934f6f5dc3cf9dc6b50bfebbfebdf9506be4ac57a5ba5-merged.mount: Deactivated successfully.
Nov 25 10:10:05 compute-0 podman[287123]: 2025-11-25 10:10:05.502690337 +0000 UTC m=+0.619966130 container remove 5ba469a05315a384eb207d183dcd25b5a86ec562b8f6319a6a9f0d96eeb2ae59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 10:10:05 compute-0 systemd[1]: libpod-conmon-5ba469a05315a384eb207d183dcd25b5a86ec562b8f6319a6a9f0d96eeb2ae59.scope: Deactivated successfully.
Nov 25 10:10:05 compute-0 sudo[287008]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:10:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:05 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:10:05 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:05 compute-0 sudo[287226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:10:05 compute-0 sudo[287226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:05 compute-0 sudo[287226]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:05 compute-0 ceph-mon[74207]: pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Nov 25 10:10:05 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:05 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:10:06 compute-0 nova_compute[253512]: 2025-11-25 10:10:06.340 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Nov 25 10:10:06 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:06 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:06 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:06.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
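
The pair of HEAD / probes above, one from 192.168.122.102 and one from 192.168.122.100, repeats roughly every two seconds for the rest of this section and looks like external liveness checking of the RGW beast frontend. A minimal sketch of such a probe in Python; the target host and port are assumptions (these log lines do not show which port beast listens on), only the HEAD / request shape is taken from the log:

    import http.client

    # Hypothetical liveness probe against the RGW beast frontend on this host.
    # Host and port are assumptions; the log only shows "HEAD / HTTP/1.0" -> 200.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # the log shows 200 with near-zero latency
    conn.close()
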
Nov 25 10:10:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:07.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:07.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:07.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:07.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:07.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
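
All of the alertmanager errors and warnings in this section share one root cause: the ceph-dashboard webhook receivers point at np0005534694/np0005534695/np0005534696.shiftstack, and the resolver at 192.168.122.80:53 has no records for those names. A sketch that reproduces the lookup outside alertmanager (it uses whatever resolver the host is configured with, so run it where alertmanager runs):

    import socket

    # Reproduce the failure alertmanager reports: each name should raise
    # socket.gaierror ("no such host") if the resolver still lacks records.
    for host in ("np0005534694.shiftstack",
                 "np0005534695.shiftstack",
                 "np0005534696.shiftstack"):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, "resolves")
        except socket.gaierror as exc:
            print(host, "failed:", exc)
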
Nov 25 10:10:07 compute-0 ceph-mon[74207]: pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Nov 25 10:10:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Nov 25 10:10:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:08.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:08.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:08.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:08.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:08 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:08 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:08 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:08.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:09.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:09 compute-0 ceph-mon[74207]: pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Nov 25 10:10:09 compute-0 nova_compute[253512]: 2025-11-25 10:10:09.960 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:10] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:10:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:10] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Nov 25 10:10:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Nov 25 10:10:10 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:10 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:10 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:10.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:11.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:11 compute-0 nova_compute[253512]: 2025-11-25 10:10:11.343 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:11 compute-0 ceph-mon[74207]: pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Nov 25 10:10:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Nov 25 10:10:12 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:12 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:12 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:12.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:13 compute-0 ceph-mon[74207]: pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Nov 25 10:10:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:14 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:14 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:14 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:14.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:14 compute-0 nova_compute[253512]: 2025-11-25 10:10:14.960 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:14 compute-0 podman[287261]: 2025-11-25 10:10:14.975440323 +0000 UTC m=+0.038048977 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
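
The health_status record above embeds the container's full config_data as a Python-style dict literal, including the healthcheck definition (mount /var/lib/openstack/healthchecks/ovn_metadata_agent, test /openstack/healthcheck). A sketch for pulling a field back out of a captured label, assuming you have the label text in a string; since it is a dict literal rather than JSON, ast.literal_eval is the right parser:

    import ast

    # config_data as it appears in the podman health_status line above,
    # trimmed to the fields this sketch reads.
    config_data = ("{'healthcheck': {'mount': "
                   "'/var/lib/openstack/healthchecks/ovn_metadata_agent', "
                   "'test': '/openstack/healthcheck'}, 'restart': 'always'}")

    # The label is a Python dict literal (single quotes, True/False),
    # so ast.literal_eval rather than json.loads.
    cfg = ast.literal_eval(config_data)
    print(cfg["healthcheck"]["test"])   # -> /openstack/healthcheck
    print(cfg["healthcheck"]["mount"])  # host path bind-mounted at /openstack
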
Nov 25 10:10:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:10:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:10:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:10:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:10:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:10:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:10:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:10:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:15.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:15 compute-0 ceph-mon[74207]: pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
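
The audit lines above show the cephadm mgr polling the OSD blocklist about every fifteen seconds. The same mon command can be issued by hand; a sketch via subprocess, assuming a reachable cluster and a keyring with mon read caps (the --format json flag mirrors what the mgr dispatches):

    import json
    import subprocess

    # Same mon command the mgr dispatches above; requires a usable keyring.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    entries = json.loads(out.stdout) if out.stdout.strip() else []
    print(len(entries), "blocklist entries")
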
Nov 25 10:10:16 compute-0 nova_compute[253512]: 2025-11-25 10:10:16.345 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:16 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:16 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:16 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:10:16 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:10:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:17.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:17.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:17.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:17.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:17.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:17 compute-0 ceph-mon[74207]: pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:18 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:18.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:18.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:18.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:18.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:18 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:18 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:18 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:18.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:19.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:19 compute-0 ceph-mon[74207]: pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:19 compute-0 nova_compute[253512]: 2025-11-25 10:10:19.961 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:20] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:10:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:20] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Nov 25 10:10:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:20 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:20 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:20 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:20.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:20 compute-0 podman[287284]: 2025-11-25 10:10:20.989057013 +0000 UTC m=+0.053290242 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:10:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:21.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:21 compute-0 nova_compute[253512]: 2025-11-25 10:10:21.346 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:21 compute-0 ceph-mon[74207]: pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:22 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:22 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:22 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:22.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:23.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:23 compute-0 ceph-mon[74207]: pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:24 compute-0 sudo[287311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:10:24 compute-0 sudo[287311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:24 compute-0 sudo[287311]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:24 compute-0 nova_compute[253512]: 2025-11-25 10:10:24.962 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:24 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:24 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:24 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:24.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:25.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:25 compute-0 ceph-mon[74207]: pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:26 compute-0 nova_compute[253512]: 2025-11-25 10:10:26.350 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:26 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:26 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:26 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:26.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:27.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:27.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:27.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:27.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:27.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:27 compute-0 ceph-mon[74207]: pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:28 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:28 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:28.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:28.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:28.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:28 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:28.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:28 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:28 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:10:28 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:28.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:10:28 compute-0 podman[287340]: 2025-11-25 10:10:28.9847262 +0000 UTC m=+0.046734877 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:10:29 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:29 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:29 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:29 compute-0 ceph-mon[74207]: pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:29 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1933998820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:29 compute-0 nova_compute[253512]: 2025-11-25 10:10:29.963 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:29 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:10:29 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:30 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:30] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:10:30 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:30] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:10:30 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:30 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1706743548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:30 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:30 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:30 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:30 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:30.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:31 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:31 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:31 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:31.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:31 compute-0 nova_compute[253512]: 2025-11-25 10:10:31.356 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:31 compute-0 nova_compute[253512]: 2025-11-25 10:10:31.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:31 compute-0 ceph-mon[74207]: pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:32 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:32 compute-0 nova_compute[253512]: 2025-11-25 10:10:32.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:32 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:32 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:32 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:32.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:33 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:33 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:33 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:33.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:33 compute-0 nova_compute[253512]: 2025-11-25 10:10:33.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:33 compute-0 nova_compute[253512]: 2025-11-25 10:10:33.488 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:10:33 compute-0 nova_compute[253512]: 2025-11-25 10:10:33.489 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:10:33 compute-0 nova_compute[253512]: 2025-11-25 10:10:33.489 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:10:33 compute-0 nova_compute[253512]: 2025-11-25 10:10:33.489 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:10:33 compute-0 nova_compute[253512]: 2025-11-25 10:10:33.489 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:10:33 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:10:33 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/623444006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:33 compute-0 ceph-mon[74207]: pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:33 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/623444006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:33 compute-0 nova_compute[253512]: 2025-11-25 10:10:33.836 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
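
Rather than binding librados directly, the resource audit shells out to the ceph CLI; the CMD lines show "ceph df --format=json" completing in about 0.35s. A sketch of the same call and the JSON fields a consumer would typically read back (field names as in recent Ceph releases; treat them as an assumption):

    import json
    import subprocess

    # The exact command nova_compute logs above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    df = json.loads(out.stdout)

    # "stats" carries cluster-wide totals; per-pool numbers live under "pools".
    print(df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["max_avail"])
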
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.061 253516 WARNING nova.virt.libvirt.driver [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.063 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4564MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.063 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.063 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.117 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.117 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.128 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:10:34 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:34 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 10:10:34 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867604374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.519 253516 DEBUG oslo_concurrency.processutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.524 253516 DEBUG nova.compute.provider_tree [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed in ProviderTree for provider: d9873737-caae-40cc-9346-77a33537057c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.535 253516 DEBUG nova.scheduler.client.report [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Inventory has not changed for provider d9873737-caae-40cc-9346-77a33537057c based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
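
The inventory dict above is what placement turns into schedulable capacity, using capacity = (total - reserved) * allocation_ratio per resource class (placement's usual formula; stated here as background, not shown in the log itself). Worked through with the numbers from this line:

    # Inventory as reported in the log line above.
    inventory = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 16.0, MEMORY_MB 7169.0, DISK_GB 52.2
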
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.536 253516 DEBUG nova.compute.resource_tracker [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.536 253516 DEBUG oslo_concurrency.lockutils [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:10:34 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3867604374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:34 compute-0 nova_compute[253512]: 2025-11-25 10:10:34.964 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:34 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:34 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:34 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:34.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:35 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:35 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:35 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:35.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
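
The paired HEAD / probes from 192.168.122.100 and .102 every two seconds look like load-balancer health checks against radosgw. A small regex, written here for illustration, that pulls the useful fields out of a beast access line like the one above:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous '
            '[25/Nov/2025:10:10:35.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000010s')
    m = BEAST.search(line)
    assert m is not None
    print(m["ip"], m["req"], m["status"], m["latency"])
    # 192.168.122.100 HEAD / HTTP/1.0 200 0.001000010
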
Nov 25 10:10:35 compute-0 nova_compute[253512]: 2025-11-25 10:10:35.533 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:35 compute-0 nova_compute[253512]: 2025-11-25 10:10:35.533 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:35 compute-0 ceph-mon[74207]: pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
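
The pgmap summaries that ceph-mgr emits and ceph-mon echoes every ~2 seconds carry the cluster health picture in one line. A throwaway parser for that summary format, matching the fields as they appear above:

    import re

    PGMAP = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); '
        r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
        r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail'
    )

    line = ('pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, '
            '289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s')
    m = PGMAP.search(line)
    assert m is not None
    print(m["ver"], m["pgs"], m["states"], "|", m["used"], "of", m["total"])
    # 1185 337 337 active+clean | 289 MiB of 60 GiB
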
Nov 25 10:10:36 compute-0 nova_compute[253512]: 2025-11-25 10:10:36.357 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:36 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:10:36 compute-0 nova_compute[253512]: 2025-11-25 10:10:36.471 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:36 compute-0 nova_compute[253512]: 2025-11-25 10:10:36.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:10:36 compute-0 nova_compute[253512]: 2025-11-25 10:10:36.471 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:10:36 compute-0 nova_compute[253512]: 2025-11-25 10:10:36.480 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:10:36 compute-0 nova_compute[253512]: 2025-11-25 10:10:36.480 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:36 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2033946624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:36 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:36 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:36 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:36.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:37 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:37 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:37 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:37.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:37.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:37.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:37.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:37 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:37.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
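
All three alertmanager webhook receivers above fail identically: np0005534694-96.shiftstack do not resolve on the 192.168.122.80 resolver, so every notify retries and is eventually canceled. A pre-flight resolution check for those hostnames (this uses the local system resolver, so it only reproduces the failure if this host also points at 192.168.122.80):

    import socket

    # Receiver hosts copied from the failing webhook URLs above.
    RECEIVERS = [
        "np0005534694.shiftstack",
        "np0005534695.shiftstack",
        "np0005534696.shiftstack",
    ]

    for host in RECEIVERS:
        try:
            print(f"{host} -> {socket.gethostbyname(host)}")
        except socket.gaierror as exc:
            # Same condition the dispatcher reports as 'no such host'.
            print(f"{host}: unresolvable ({exc})")
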
Nov 25 10:10:37 compute-0 ceph-mon[74207]: pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:10:37 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1927464700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 10:10:38 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:38 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:38 compute-0 nova_compute[253512]: 2025-11-25 10:10:38.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:38 compute-0 nova_compute[253512]: 2025-11-25 10:10:38.472 253516 DEBUG nova.compute.manager [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
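
_reclaim_queued_deletes exits immediately here because reclaim_instance_interval is not set to a positive value, i.e. soft-delete reclaim is disabled on this node. The guard amounts to something like the following sketch (names hypothetical, not Nova's exact code):

    def reclaim_queued_deletes(reclaim_instance_interval: int) -> None:
        # With the interval unset or zero, soft-deleted instances are
        # never reclaimed by this periodic task.
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise: find SOFT_DELETED instances older than the
        # interval and delete them for real.
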
Nov 25 10:10:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:38.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:38.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:38.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:38 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:38.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:38 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:38 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 10:10:38 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:38.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 10:10:39 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:39 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:39 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:39.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:39 compute-0 ceph-mon[74207]: pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:39 compute-0 nova_compute[253512]: 2025-11-25 10:10:39.966 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:40 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:40] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Nov 25 10:10:40 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:40] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
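
Prometheus scrapes the mgr's metrics endpoint every 10 seconds (48,532 bytes per scrape here). The same endpoint can be pulled by hand; the port is not visible in the log, so the mgr prometheus module's usual default of 9283 is assumed below:

    import urllib.request

    URL = "http://192.168.122.100:9283/metrics"  # port 9283 assumed

    with urllib.request.urlopen(URL, timeout=5) as resp:
        status = resp.status
        body = resp.read().decode()

    samples = [ln for ln in body.splitlines() if ln and not ln.startswith("#")]
    print(f"HTTP {status}, {len(body)} bytes, {len(samples)} samples")
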
Nov 25 10:10:40 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:40 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:40 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:40 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:40.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:41 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:41 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:41 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:41.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:41 compute-0 nova_compute[253512]: 2025-11-25 10:10:41.359 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:41 compute-0 nova_compute[253512]: 2025-11-25 10:10:41.472 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:41 compute-0 ceph-mon[74207]: pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:42 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:10:42 compute-0 nova_compute[253512]: 2025-11-25 10:10:42.467 253516 DEBUG oslo_service.periodic_task [None req-b845d3ac-64ad-4fbb-b6ab-0fd2a02e7531 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:10:42 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:42 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:42 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:42.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:43 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:43 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:43 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:43.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:43 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:43 compute-0 ceph-mon[74207]: pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:10:44 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:44 compute-0 sudo[287417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:10:44 compute-0 sudo[287417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:10:44 compute-0 sudo[287417]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:44 compute-0 nova_compute[253512]: 2025-11-25 10:10:44.968 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Optimize plan auto_2025-11-25_10:10:44
Nov 25 10:10:44 compute-0 ceph-mgr[74476]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:10:44 compute-0 ceph-mgr[74476]: [balancer INFO root] do_upmap
Nov 25 10:10:44 compute-0 ceph-mgr[74476]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'backups', '.nfs']
Nov 25 10:10:44 compute-0 ceph-mgr[74476]: [balancer INFO root] prepared 0/10 upmap changes
Nov 25 10:10:44 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:10:44 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:44 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:44 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:44 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:44.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:10:45 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:45 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:10:45 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:10:45 compute-0 ceph-mgr[74476]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:10:45 compute-0 ceph-mon[74207]: pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:45 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:10:45 compute-0 podman[287443]: 2025-11-25 10:10:45.98042779 +0000 UTC m=+0.039796860 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
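
The podman health_status events, like the ovn_metadata_agent line above, are single very long key=value journal lines; only a few fields matter for monitoring. A best-effort extractor (the embedded config_data blob is deliberately ignored, not parsed):

    import re

    def health_fields(line: str) -> dict:
        # Pull selected key=value fields out of a podman
        # 'container health_status' journal line.
        out = {}
        for key in ("name", "health_status", "health_failing_streak"):
            m = re.search(rf"\b{key}=([^,)]+)", line)
            if m:
                out[key] = m.group(1).strip()
        return out

    # For the ovn_metadata_agent line above this yields:
    # {'name': 'ovn_metadata_agent', 'health_status': 'healthy',
    #  'health_failing_streak': '0'}
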
Nov 25 10:10:46 compute-0 nova_compute[253512]: 2025-11-25 10:10:46.359 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:46 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:10:46 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:46 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:46 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:46.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:47 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:47 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:47 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:47.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:47.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:47.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:47.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:47 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:47.120Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:47 compute-0 ceph-mon[74207]: pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 25 10:10:48 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:48 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:48.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:48.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:48.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:48 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:48.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:48 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:48 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:48 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:48.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:49 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:49 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:49 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:49 compute-0 ceph-mon[74207]: pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:49 compute-0 nova_compute[253512]: 2025-11-25 10:10:49.969 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:50 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:50] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:10:50 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:10:50] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:10:50 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:50 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:50 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:50 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:50.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:51 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:51 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:51 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:51 compute-0 nova_compute[253512]: 2025-11-25 10:10:51.360 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:51 compute-0 ceph-mon[74207]: pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:51 compute-0 podman[287467]: 2025-11-25 10:10:51.995494823 +0000 UTC m=+0.059129669 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 10:10:52 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:52.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:53 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:53 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:53 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:53 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:53 compute-0 ceph-mon[74207]: pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 10:10:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1051001646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:10:54 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 10:10:54 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1051001646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:10:54 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1051001646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 10:10:54 compute-0 ceph-mon[74207]: from='client.? 192.168.122.10:0/1051001646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 10:10:54 compute-0 nova_compute[253512]: 2025-11-25 10:10:54.971 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:55.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:55 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:55 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:55 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:55.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:10:55 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
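
The pg_autoscaler lines above reproduce arithmetically: each "pg target" equals usage_ratio x bias x 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times the three OSDs behind this 60 GiB cluster; the result is then quantized to a power of two and left at the current pg_num when the change is too small to act on. A check of that reading against three of the logged lines (an interpretation of the log, not the module's exact code):

    TARGET_PGS = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    pools = [
        # (name, usage_ratio, bias, logged pg target)
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]

    for name, usage, bias, logged in pools:
        computed = usage * bias * TARGET_PGS
        assert abs(computed - logged) < 1e-12, name
        print(f"{name}: {computed:.12g} (matches log)")
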
Nov 25 10:10:55 compute-0 ceph-mon[74207]: pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:56 compute-0 nova_compute[253512]: 2025-11-25 10:10:56.362 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:56 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:57 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:57 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:57 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:57.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:57.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:57.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:57.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:57 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:57.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:57 compute-0 ceph-mon[74207]: pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:10:58 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:10:58 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:58.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:58 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:58.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:59.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:59 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:10:59.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:10:59 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:10:59 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:10:59 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:59.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:10:59 compute-0 ceph-mon[74207]: pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:10:59 compute-0 nova_compute[253512]: 2025-11-25 10:10:59.972 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:10:59 compute-0 podman[287498]: 2025-11-25 10:10:59.973455111 +0000 UTC m=+0.039115077 container health_status 4d99f0f6de84615714441cc182071c0412249454601520b3a630daac0314343e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:10:59 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:10:59 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:11:00 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:11:00] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:11:00 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:11:00] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:11:00 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:00 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:11:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:01 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:01 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:01 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:01.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:01 compute-0 nova_compute[253512]: 2025-11-25 10:11:01.364 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:01 compute-0 sshd-session[287518]: Accepted publickey for zuul from 192.168.122.10 port 41132 ssh2: ECDSA SHA256:XEYKo3oFYudY6Nqhvu5xSntKhvKu8TJT9WLXTNnblq8
Nov 25 10:11:01 compute-0 systemd-logind[744]: New session 59 of user zuul.
Nov 25 10:11:01 compute-0 systemd[1]: Started Session 59 of User zuul.
Nov 25 10:11:01 compute-0 sshd-session[287518]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:11:01 compute-0 sudo[287522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 10:11:01 compute-0 sudo[287522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
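
The zuul session's sudo line starts diagnostics collection: wipe /var/tmp/sos-osp, recreate it, then run sos with the container, openstack_edpm, system, storage, and virt profiles. The equivalent, scripted (must run as root; the sos arguments are copied verbatim from the audit line above):

    import os
    import shutil
    import subprocess

    TMP = "/var/tmp/sos-osp"
    shutil.rmtree(TMP, ignore_errors=True)   # rm -rf /var/tmp/sos-osp
    os.makedirs(TMP)                         # mkdir /var/tmp/sos-osp
    subprocess.run(
        ["sos", "report", "--batch", "--all-logs", f"--tmp-dir={TMP}",
         "-p", "container,openstack_edpm,system,storage,virt"],
        check=True,
    )
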
Nov 25 10:11:01 compute-0 ceph-mon[74207]: pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:02 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:03 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:03 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:03 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:03.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:03 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18864 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:03 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28595 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:03 compute-0 ceph-mon[74207]: pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:04 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28480 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:04 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18879 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:04 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28607 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:04 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:04 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28495 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:04 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 10:11:04 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192645614' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:11:04 compute-0 sudo[287745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:11:04 compute-0 sudo[287745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:04 compute-0 sudo[287745]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:04 compute-0 nova_compute[253512]: 2025-11-25 10:11:04.973 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:04 compute-0 ceph-mon[74207]: from='client.18864 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:04 compute-0 ceph-mon[74207]: from='client.28595 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/192645614' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:11:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1855548595' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:11:04 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1775628257' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:11:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:11:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:11:05 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:05 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:05 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:05.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:11:05.397 164791 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:11:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:11:05.397 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:11:05 compute-0 ovn_metadata_agent[164786]: 2025-11-25 10:11:05.397 164791 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:11:05 compute-0 sudo[287830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:11:05 compute-0 sudo[287830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:05 compute-0 sudo[287830]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:05 compute-0 sudo[287855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 25 10:11:05 compute-0 sudo[287855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:05 compute-0 ceph-mon[74207]: from='client.28480 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:05 compute-0 ceph-mon[74207]: from='client.18879 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:05 compute-0 ceph-mon[74207]: from='client.28607 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:05 compute-0 ceph-mon[74207]: pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:05 compute-0 ceph-mon[74207]: from='client.28495 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:06 compute-0 podman[287938]: 2025-11-25 10:11:06.220665993 +0000 UTC m=+0.044459619 container exec f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 10:11:06 compute-0 podman[287938]: 2025-11-25 10:11:06.303170739 +0000 UTC m=+0.126964365 container exec_died f4319dd179814545640e64fdfcac1db77b2823abcbdcb2b1a114369d4864994d (image=quay.io/ceph/ceph:v19, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 25 10:11:06 compute-0 nova_compute[253512]: 2025-11-25 10:11:06.365 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:06 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:06 compute-0 podman[288051]: 2025-11-25 10:11:06.674495156 +0000 UTC m=+0.037686142 container exec e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:11:06 compute-0 podman[288051]: 2025-11-25 10:11:06.683102899 +0000 UTC m=+0.046293886 container exec_died e3abe27f278418218cb5f7470cd5d3397a8fee103f97aeb872e8458ba13d6ef5 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:11:06 compute-0 podman[288122]: 2025-11-25 10:11:06.88907043 +0000 UTC m=+0.037036557 container exec 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:11:06 compute-0 podman[288122]: 2025-11-25 10:11:06.910977651 +0000 UTC m=+0.058943758 container exec_died 7d9019b3aee322b2ee107252f1be6572b69294b6c6017f1cc21d1755afbd4218 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:11:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:07 compute-0 podman[288202]: 2025-11-25 10:11:07.073240707 +0000 UTC m=+0.038996423 container exec c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 10:11:07 compute-0 ovs-vsctl[288222]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 10:11:07 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:07 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:07 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:07.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:07.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:07.120Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:07.120Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:07 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:07.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:07 compute-0 podman[288202]: 2025-11-25 10:11:07.229251898 +0000 UTC m=+0.195007613 container exec_died c3bda6516cc366ad6c796070a0d9baad2f2fe6c4fc0eea9580e16af9efa6d907 (image=quay.io/ceph/grafana:10.4.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 25 10:11:07 compute-0 podman[288300]: 2025-11-25 10:11:07.375419688 +0000 UTC m=+0.035664152 container exec e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 10:11:07 compute-0 podman[288300]: 2025-11-25 10:11:07.386139923 +0000 UTC m=+0.046384397 container exec_died e42161b5a4203d144f1ea674b61ad86c2cd34158003d05504bb2eb346b4dc2bd (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-0-jgcdmc)
Nov 25 10:11:07 compute-0 podman[288358]: 2025-11-25 10:11:07.52992967 +0000 UTC m=+0.036320960 container exec 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, version=2.2.4, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, release=1793)
Nov 25 10:11:07 compute-0 podman[288358]: 2025-11-25 10:11:07.540070934 +0000 UTC m=+0.046462223 container exec_died 22d20702fe707735a3addd7103370a3e1d2755162215ffc1eb75e650ddee9db2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-0-ulmpfs, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793)
Nov 25 10:11:07 compute-0 podman[288456]: 2025-11-25 10:11:07.706303772 +0000 UTC m=+0.042202806 container exec 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:11:07 compute-0 virtqemud[252911]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 10:11:07 compute-0 virtqemud[252911]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 10:11:07 compute-0 podman[288513]: 2025-11-25 10:11:07.785018354 +0000 UTC m=+0.046939113 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:11:07 compute-0 podman[288456]: 2025-11-25 10:11:07.788217334 +0000 UTC m=+0.124116368 container exec_died 38f8efc838e7d2324e2dc5dda0bfb47a1a257005ded6829787773973cdd7e3fd (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:11:07 compute-0 virtqemud[252911]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 10:11:08 compute-0 ceph-mon[74207]: pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:08 compute-0 sudo[287855]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:11:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:08 compute-0 sudo[288684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:11:08 compute-0 sudo[288684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:08 compute-0 sudo[288684]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:08 compute-0 sudo[288733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 25 10:11:08 compute-0 sudo[288733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:08 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: cache status {prefix=cache status} (starting...)
Nov 25 10:11:08 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:08 compute-0 lvm[288812]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:11:08 compute-0 lvm[288812]: VG ceph_vg0 finished
Nov 25 10:11:08 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: client ls {prefix=client ls} (starting...)
Nov 25 10:11:08 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:08 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28628 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:08 compute-0 sudo[288733]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:11:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 25 10:11:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:11:08 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 25 10:11:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 25 10:11:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:08.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:08.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:08.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:08 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:08.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 25 10:11:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 25 10:11:08 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:11:08 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:11:08 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 10:11:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:09.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18912 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28528 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 sudo[288969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:11:09 compute-0 sudo[288969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:09 compute-0 sudo[288969]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:09 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:09 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:09 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:09.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28534 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:09 compute-0 ceph-mon[74207]: pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='client.28628 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4041974016' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 10:11:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364061829' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:09 compute-0 sudo[288998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 25 10:11:09 compute-0 sudo[288998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 10:11:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28673 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18942 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28679 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 10:11:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335620262' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-0 podman[289156]: 2025-11-25 10:11:09.723459241 +0000 UTC m=+0.045468670 container create 1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curie, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 10:11:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28703 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 systemd[1]: Started libpod-conmon-1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3.scope.
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 10:11:09 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28709 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:11:09 compute-0 podman[289156]: 2025-11-25 10:11:09.700561053 +0000 UTC m=+0.022570482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:11:09 compute-0 podman[289156]: 2025-11-25 10:11:09.797257407 +0000 UTC m=+0.119266846 container init 1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:11:09 compute-0 podman[289156]: 2025-11-25 10:11:09.817126767 +0000 UTC m=+0.139136197 container start 1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:11:09 compute-0 podman[289156]: 2025-11-25 10:11:09.818328001 +0000 UTC m=+0.140337450 container attach 1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curie, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 10:11:09 compute-0 systemd[1]: libpod-1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3.scope: Deactivated successfully.
Nov 25 10:11:09 compute-0 musing_curie[289171]: 167 167
Nov 25 10:11:09 compute-0 conmon[289171]: conmon 1491b8fac1c122e0dd64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3.scope/container/memory.events
Nov 25 10:11:09 compute-0 podman[289156]: 2025-11-25 10:11:09.822940435 +0000 UTC m=+0.144949884 container died 1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curie, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 10:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-27015b21c3feccf686e0c7869ba0acdb171591d742d5cb66c8d887eeefd21663-merged.mount: Deactivated successfully.
Nov 25 10:11:09 compute-0 podman[289156]: 2025-11-25 10:11:09.86134987 +0000 UTC m=+0.183359298 container remove 1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 10:11:09 compute-0 systemd[1]: libpod-conmon-1491b8fac1c122e0dd648c28a87197d597b6474d6588a4ecdc0ee23b833206d3.scope: Deactivated successfully.
Nov 25 10:11:09 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.18978 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:09 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 25 10:11:09 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2182540642' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:11:09 compute-0 nova_compute[253512]: 2025-11-25 10:11:09.973 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:10 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 10:11:10 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:10 compute-0 podman[289206]: 2025-11-25 10:11:10.07234734 +0000 UTC m=+0.055766440 container create 97d869c26603773f41dd10948a62af9867285cf33c6418f9e0f89fab2d62f809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 10:11:10 compute-0 systemd[1]: Started libpod-conmon-97d869c26603773f41dd10948a62af9867285cf33c6418f9e0f89fab2d62f809.scope.
Nov 25 10:11:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107e3abe7791c3c09ce6b03a9d0313af71120624f3e38ac97109f87c4129f346/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107e3abe7791c3c09ce6b03a9d0313af71120624f3e38ac97109f87c4129f346/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107e3abe7791c3c09ce6b03a9d0313af71120624f3e38ac97109f87c4129f346/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107e3abe7791c3c09ce6b03a9d0313af71120624f3e38ac97109f87c4129f346/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107e3abe7791c3c09ce6b03a9d0313af71120624f3e38ac97109f87c4129f346/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:10 compute-0 podman[289206]: 2025-11-25 10:11:10.140761768 +0000 UTC m=+0.124180888 container init 97d869c26603773f41dd10948a62af9867285cf33c6418f9e0f89fab2d62f809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_ptolemy, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.18912 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.28528 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.28534 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1364061829' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3774019563' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:10 compute-0 podman[289206]: 2025-11-25 10:11:10.047991865 +0000 UTC m=+0.031410985 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4051379379' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.28673 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.18942 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.28679 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2335620262' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2787533441' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/698441370' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.28703 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.28709 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.18978 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2182540642' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2712726419' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2520612999' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28591 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 podman[289206]: 2025-11-25 10:11:10.160809064 +0000 UTC m=+0.144228164 container start 97d869c26603773f41dd10948a62af9867285cf33c6418f9e0f89fab2d62f809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 25 10:11:10 compute-0 podman[289206]: 2025-11-25 10:11:10.162309983 +0000 UTC m=+0.145729093 container attach 97d869c26603773f41dd10948a62af9867285cf33c6418f9e0f89fab2d62f809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_ptolemy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:11:10 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:11:10] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:11:10 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:11:10] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:11:10 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 10:11:10 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:10 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19002 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28760 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 25 10:11:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624300683' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:11:10 compute-0 laughing_ptolemy[289249]: --> passed data devices: 0 physical, 1 LVM
Nov 25 10:11:10 compute-0 laughing_ptolemy[289249]: --> All data devices are unavailable
Nov 25 10:11:10 compute-0 systemd[1]: libpod-97d869c26603773f41dd10948a62af9867285cf33c6418f9e0f89fab2d62f809.scope: Deactivated successfully.
Nov 25 10:11:10 compute-0 podman[289206]: 2025-11-25 10:11:10.458814969 +0000 UTC m=+0.442234069 container died 97d869c26603773f41dd10948a62af9867285cf33c6418f9e0f89fab2d62f809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 10:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-107e3abe7791c3c09ce6b03a9d0313af71120624f3e38ac97109f87c4129f346-merged.mount: Deactivated successfully.
Nov 25 10:11:10 compute-0 podman[289206]: 2025-11-25 10:11:10.508022626 +0000 UTC m=+0.491441726 container remove 97d869c26603773f41dd10948a62af9867285cf33c6418f9e0f89fab2d62f809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 10:11:10 compute-0 systemd[1]: libpod-conmon-97d869c26603773f41dd10948a62af9867285cf33c6418f9e0f89fab2d62f809.scope: Deactivated successfully.
Nov 25 10:11:10 compute-0 sudo[288998]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:10 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: ops {prefix=ops} (starting...)
Nov 25 10:11:10 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:10 compute-0 sudo[289324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:11:10 compute-0 sudo[289324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:10 compute-0 sudo[289324]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:10 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28615 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28621 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19041 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:10 compute-0 sudo[289365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- lvm list --format json
Nov 25 10:11:10 compute-0 sudo[289365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 25 10:11:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543896602' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:11:10 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:10 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 10:11:10 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/443310296' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28645 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 25 10:11:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:11 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:11 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:11 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:11.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:11 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19056 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/605611227' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.28591 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.19002 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2543856292' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.28760 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2624300683' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2915980700' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3634701221' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.28615 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.28621 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.19041 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3543896602' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/62713454' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/443310296' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.28645 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1167894320' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mon[74207]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: session ls {prefix=session ls} (starting...)
Nov 25 10:11:11 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw Can't run that command on an inactive MDS!
Nov 25 10:11:11 compute-0 podman[289478]: 2025-11-25 10:11:11.28924259 +0000 UTC m=+0.052364720 container create 024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:11:11 compute-0 systemd[1]: Started libpod-conmon-024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37.scope.
Nov 25 10:11:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 10:11:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1938353341' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:11:11 compute-0 podman[289478]: 2025-11-25 10:11:11.347697315 +0000 UTC m=+0.110819455 container init 024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:11:11 compute-0 podman[289478]: 2025-11-25 10:11:11.351844663 +0000 UTC m=+0.114966783 container start 024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 10:11:11 compute-0 podman[289478]: 2025-11-25 10:11:11.352875936 +0000 UTC m=+0.115998057 container attach 024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_hopper, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 10:11:11 compute-0 compassionate_hopper[289496]: 167 167
Nov 25 10:11:11 compute-0 systemd[1]: libpod-024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37.scope: Deactivated successfully.
Nov 25 10:11:11 compute-0 conmon[289496]: conmon 024c76bef1608b6c2487 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37.scope/container/memory.events
Nov 25 10:11:11 compute-0 podman[289478]: 2025-11-25 10:11:11.357528085 +0000 UTC m=+0.120650204 container died 024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 10:11:11 compute-0 nova_compute[253512]: 2025-11-25 10:11:11.366 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:11 compute-0 podman[289478]: 2025-11-25 10:11:11.265798953 +0000 UTC m=+0.028921093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca5e695e68200ac8a22801ad39a957cbe62c514f311f923c31795b61e96a1ad3-merged.mount: Deactivated successfully.
Nov 25 10:11:11 compute-0 podman[289478]: 2025-11-25 10:11:11.387473756 +0000 UTC m=+0.150595876 container remove 024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 10:11:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 25 10:11:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:11 compute-0 systemd[1]: libpod-conmon-024c76bef1608b6c24874c6cdf9183895dc8f67333e3fceb8ec5f0856bd4ee37.scope: Deactivated successfully.
Nov 25 10:11:11 compute-0 ceph-mds[95869]: mds.cephfs.compute-0.wjveyw asok_command: status {prefix=status} (starting...)
Nov 25 10:11:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 25 10:11:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772863110' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:11 compute-0 podman[289554]: 2025-11-25 10:11:11.602803893 +0000 UTC m=+0.053137064 container create d64ec04b3452f11ba408e155165ceb9b6bc859eb88412e49786a6b335a73af63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hofstadter, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 10:11:11 compute-0 systemd[1]: Started libpod-conmon-d64ec04b3452f11ba408e155165ceb9b6bc859eb88412e49786a6b335a73af63.scope.
Nov 25 10:11:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc3beb5a6efa1dd3f1e3acd73187b2e257f5468388ea23459bb1d4c321bade2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc3beb5a6efa1dd3f1e3acd73187b2e257f5468388ea23459bb1d4c321bade2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc3beb5a6efa1dd3f1e3acd73187b2e257f5468388ea23459bb1d4c321bade2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc3beb5a6efa1dd3f1e3acd73187b2e257f5468388ea23459bb1d4c321bade2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:11 compute-0 podman[289554]: 2025-11-25 10:11:11.659564025 +0000 UTC m=+0.109897196 container init d64ec04b3452f11ba408e155165ceb9b6bc859eb88412e49786a6b335a73af63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 25 10:11:11 compute-0 podman[289554]: 2025-11-25 10:11:11.665865884 +0000 UTC m=+0.116199044 container start d64ec04b3452f11ba408e155165ceb9b6bc859eb88412e49786a6b335a73af63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:11:11 compute-0 podman[289554]: 2025-11-25 10:11:11.673680343 +0000 UTC m=+0.124013523 container attach d64ec04b3452f11ba408e155165ceb9b6bc859eb88412e49786a6b335a73af63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 10:11:11 compute-0 podman[289554]: 2025-11-25 10:11:11.587791708 +0000 UTC m=+0.038124869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:11:11 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28859 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:11 compute-0 ceph-mgr[74476]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:11:11 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T10:11:11.810+0000 7f5ef14f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:11:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 10:11:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4106823401' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]: {
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:     "1": [
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:         {
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "devices": [
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "/dev/loop3"
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             ],
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "lv_name": "ceph_lv0",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "lv_size": "21470642176",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=26fb5eac-2c31-4a21-bbae-433f98108699,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "lv_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "name": "ceph_lv0",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "tags": {
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.block_uuid": "R4liRd-QAut-sOPj-84FL-wgU1-vF6K-uqhBIx",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.cluster_name": "ceph",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.crush_device_class": "",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.encrypted": "0",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.osd_fsid": "26fb5eac-2c31-4a21-bbae-433f98108699",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.osd_id": "1",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.type": "block",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.vdo": "0",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:                 "ceph.with_tpm": "0"
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             },
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "type": "block",
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:             "vg_name": "ceph_vg0"
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:         }
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]:     ]
Nov 25 10:11:11 compute-0 clever_hofstadter[289579]: }
Nov 25 10:11:11 compute-0 systemd[1]: libpod-d64ec04b3452f11ba408e155165ceb9b6bc859eb88412e49786a6b335a73af63.scope: Deactivated successfully.
Nov 25 10:11:11 compute-0 podman[289554]: 2025-11-25 10:11:11.93152477 +0000 UTC m=+0.381857942 container died d64ec04b3452f11ba408e155165ceb9b6bc859eb88412e49786a6b335a73af63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cc3beb5a6efa1dd3f1e3acd73187b2e257f5468388ea23459bb1d4c321bade2-merged.mount: Deactivated successfully.
Nov 25 10:11:11 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 10:11:11 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/571812981' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:11 compute-0 podman[289554]: 2025-11-25 10:11:11.985644056 +0000 UTC m=+0.435977217 container remove d64ec04b3452f11ba408e155165ceb9b6bc859eb88412e49786a6b335a73af63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hofstadter, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 25 10:11:11 compute-0 systemd[1]: libpod-conmon-d64ec04b3452f11ba408e155165ceb9b6bc859eb88412e49786a6b335a73af63.scope: Deactivated successfully.
Nov 25 10:11:12 compute-0 sudo[289365]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 25 10:11:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/699907023' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:11:12 compute-0 sudo[289653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 10:11:12 compute-0 sudo[289653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.19056 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2725417152' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4244050022' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1938353341' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1401436123' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1010831103' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1772863110' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2801456606' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3785103193' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/814928782' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.28859 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4106823401' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3375895743' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/571812981' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/699907023' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 10:11:12 compute-0 sudo[289653]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:12 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28883 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mgr[74476]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:11:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T10:11:12.172+0000 7f5ef14f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:11:12 compute-0 sudo[289683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 -- raw list --format json
Nov 25 10:11:12 compute-0 sudo[289683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 25 10:11:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1898159654' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19149 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mgr[74476]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:11:12 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: 2025-11-25T10:11:12.611+0000 7f5ef14f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 10:11:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 10:11:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554816375' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:12 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28928 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:12 compute-0 podman[289834]: 2025-11-25 10:11:12.848418895 +0000 UTC m=+0.048390618 container create 4e9b95441765fe3602b76784a10b9ab7cc5bb5de53f775933db39a271894abdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:11:12 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:12 compute-0 systemd[1]: Started libpod-conmon-4e9b95441765fe3602b76784a10b9ab7cc5bb5de53f775933db39a271894abdf.scope.
Nov 25 10:11:12 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 10:11:12 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2761747961' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:11:12 compute-0 podman[289834]: 2025-11-25 10:11:12.916925817 +0000 UTC m=+0.116897551 container init 4e9b95441765fe3602b76784a10b9ab7cc5bb5de53f775933db39a271894abdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lalande, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:11:12 compute-0 podman[289834]: 2025-11-25 10:11:12.831767099 +0000 UTC m=+0.031738843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:11:12 compute-0 podman[289834]: 2025-11-25 10:11:12.933162059 +0000 UTC m=+0.133133793 container start 4e9b95441765fe3602b76784a10b9ab7cc5bb5de53f775933db39a271894abdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:11:12 compute-0 intelligent_lalande[289857]: 167 167
Nov 25 10:11:12 compute-0 systemd[1]: libpod-4e9b95441765fe3602b76784a10b9ab7cc5bb5de53f775933db39a271894abdf.scope: Deactivated successfully.
Nov 25 10:11:12 compute-0 podman[289834]: 2025-11-25 10:11:12.934797802 +0000 UTC m=+0.134769526 container attach 4e9b95441765fe3602b76784a10b9ab7cc5bb5de53f775933db39a271894abdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lalande, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:11:12 compute-0 podman[289834]: 2025-11-25 10:11:12.935521426 +0000 UTC m=+0.135493170 container died 4e9b95441765fe3602b76784a10b9ab7cc5bb5de53f775933db39a271894abdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 10:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-de9bad4c462c626aad2d17c04546e041d28406229ca5d238a85933db5f521318-merged.mount: Deactivated successfully.
Nov 25 10:11:12 compute-0 podman[289834]: 2025-11-25 10:11:12.969999981 +0000 UTC m=+0.169971706 container remove 4e9b95441765fe3602b76784a10b9ab7cc5bb5de53f775933db39a271894abdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lalande, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:11:12 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28949 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:12 compute-0 systemd[1]: libpod-conmon-4e9b95441765fe3602b76784a10b9ab7cc5bb5de53f775933db39a271894abdf.scope: Deactivated successfully.
Nov 25 10:11:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28964 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:13 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:13 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:11:13 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:13.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:11:13 compute-0 podman[289883]: 2025-11-25 10:11:13.108127957 +0000 UTC m=+0.033003706 container create 98b9d26b04250b85f205388fe66d52728e0643eb981c2ebe958e1aa1d9e85b32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 10:11:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 25 10:11:13 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2399510507' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:13 compute-0 systemd[1]: Started libpod-conmon-98b9d26b04250b85f205388fe66d52728e0643eb981c2ebe958e1aa1d9e85b32.scope.
Nov 25 10:11:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e965610feeb23624306a971967bbd7837f5f8ebef55dcdd95d225fbec85e1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e965610feeb23624306a971967bbd7837f5f8ebef55dcdd95d225fbec85e1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e965610feeb23624306a971967bbd7837f5f8ebef55dcdd95d225fbec85e1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e965610feeb23624306a971967bbd7837f5f8ebef55dcdd95d225fbec85e1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:11:13 compute-0 podman[289883]: 2025-11-25 10:11:13.171013935 +0000 UTC m=+0.095889705 container init 98b9d26b04250b85f205388fe66d52728e0643eb981c2ebe958e1aa1d9e85b32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3917255956' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.28883 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/694499281' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2000504221' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1958780118' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1657872299' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1898159654' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.19149 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2554816375' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.28928 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3866070346' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2761747961' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2972446715' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.28949 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.28964 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/997004426' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2399510507' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:11:13 compute-0 podman[289883]: 2025-11-25 10:11:13.182775013 +0000 UTC m=+0.107650763 container start 98b9d26b04250b85f205388fe66d52728e0643eb981c2ebe958e1aa1d9e85b32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 10:11:13 compute-0 podman[289883]: 2025-11-25 10:11:13.185927204 +0000 UTC m=+0.110802974 container attach 98b9d26b04250b85f205388fe66d52728e0643eb981c2ebe958e1aa1d9e85b32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 10:11:13 compute-0 podman[289883]: 2025-11-25 10:11:13.09317835 +0000 UTC m=+0.018054100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 10:11:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28976 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28801 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 10:11:13 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1422420507' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29003 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28819 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 10:11:13 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3558068328' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:13 compute-0 modest_shockley[289900]: {}
Nov 25 10:11:13 compute-0 lvm[290056]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:11:13 compute-0 lvm[290056]: VG ceph_vg0 finished
Nov 25 10:11:13 compute-0 podman[289883]: 2025-11-25 10:11:13.818998344 +0000 UTC m=+0.743874104 container died 98b9d26b04250b85f205388fe66d52728e0643eb981c2ebe958e1aa1d9e85b32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 10:11:13 compute-0 systemd[1]: libpod-98b9d26b04250b85f205388fe66d52728e0643eb981c2ebe958e1aa1d9e85b32.scope: Deactivated successfully.
Nov 25 10:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-63e965610feeb23624306a971967bbd7837f5f8ebef55dcdd95d225fbec85e1a-merged.mount: Deactivated successfully.
Nov 25 10:11:13 compute-0 podman[289883]: 2025-11-25 10:11:13.850126375 +0000 UTC m=+0.775002125 container remove 98b9d26b04250b85f205388fe66d52728e0643eb981c2ebe958e1aa1d9e85b32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:11:13 compute-0 systemd[1]: libpod-conmon-98b9d26b04250b85f205388fe66d52728e0643eb981c2ebe958e1aa1d9e85b32.scope: Deactivated successfully.
Nov 25 10:11:13 compute-0 sudo[289683]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 25 10:11:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:13 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 25 10:11:13 compute-0 ceph-mon[74207]: log_channel(audit) log [INF] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:13 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28834 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:13 compute-0 sudo[290071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 10:11:14 compute-0 sudo[290071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:14 compute-0 sudo[290071]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29018 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29021 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/595608251' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.28976 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.28801 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2477092599' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2482510223' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1422420507' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.29003 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.28819 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2056844208' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3558068328' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.28834 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.29018 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1093066099' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3373365784' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 10:11:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330753870' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29039 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29045 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29048 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19299 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29075 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28891 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:14 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:14 compute-0 nova_compute[253512]: 2025-11-25 10:11:14.974 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:14 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 25 10:11:14 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:11:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:15.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29105 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:15 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:15 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:15 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:15.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.29021 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1330753870' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.29039 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.29045 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.29048 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1907537029' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3730769747' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3024921306' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.19299 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.29075 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2503635589' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.28891 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.19323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/866828572' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.29105 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2441670351' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28927 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19338 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 10:11:15 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471746005' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29132 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19365 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19371 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 10:11:15 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089255639' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29159 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.28990 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19401 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:15 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 25 10:11:15 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936966365' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19407 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29020 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 25 10:11:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3996815968' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.28927 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.19338 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3471746005' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.29132 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3914276966' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.19365 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.19371 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/372716450' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4089255639' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.29159 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.28990 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1256456425' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.19401 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1936966365' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.19407 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/94335548' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3996815968' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19431 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 nova_compute[253512]: 2025-11-25 10:11:16.368 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:16 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19455 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29237 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 25 10:11:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894594024' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 25 10:11:16 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1034596140' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:11:16 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:16 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29074 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:11:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:17.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:11:17 compute-0 podman[290815]: 2025-11-25 10:11:17.027934157 +0000 UTC m=+0.093377010 container health_status c3149e22320f34b3033fb64786619abdf61c59a086af42648b1e57066bbbcb62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 10:11:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 25 10:11:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982949518' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:11:17 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:17 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:17 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:17.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:17.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:17.120Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:17.120Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:17 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:17.120Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.29020 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.19431 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3907824213' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/13099019' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2661222347' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.19455 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.29237 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/34667073' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3894594024' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/869887195' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1034596140' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.29074 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2768433089' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1222381632' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2982949518' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2615350652' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2630515814' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19506 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 25 10:11:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328015327' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:11:17 compute-0 ceph-mon[74207]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7487 writes, 33K keys, 7487 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 7487 writes, 7487 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1612 writes, 7549 keys, 1612 commit groups, 1.0 writes per commit group, ingest: 12.23 MB, 0.02 MB/s
                                           Interval WAL: 1612 writes, 1612 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    399.7      0.13              0.08        19    0.007       0      0       0.0       0.0
                                             L6      1/0   12.15 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.5    522.2    446.8      0.51              0.36        18    0.029    100K   9956       0.0       0.0
                                            Sum      1/0   12.15 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.5    418.5    437.4      0.64              0.45        37    0.017    100K   9956       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    466.7    470.3      0.17              0.12        10    0.017     33K   3094       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0    522.2    446.8      0.51              0.36        18    0.029    100K   9956       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    404.5      0.13              0.08        18    0.007       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     27.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.27 GB write, 0.12 MB/s write, 0.26 GB read, 0.11 MB/s read, 0.6 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e6ae573350#2 capacity: 304.00 MB usage: 25.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000149 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1623,25.21 MB,8.29194%) FilterBlock(38,287.05 KB,0.0922103%) IndexBlock(38,495.53 KB,0.159183%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 10:11:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 25 10:11:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740963012' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 25 10:11:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4252505220' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 10:11:17 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1708222253' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 5128192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:29.327105+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 5120000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:30.327210+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c45c00 session 0x564fb3cd1a40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c44000 session 0x564fb30ae960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 5111808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:31.327367+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5103616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:32.327566+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5103616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:33.327699+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5103616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:34.327847+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 5095424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:35.327978+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 5095424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:36.328064+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 5087232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:37.328164+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 5087232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:38.328291+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 5087232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936595 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:39.328389+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5070848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:40.328493+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.600059509s of 38.601051331s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5070848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:41.328591+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5070848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:42.328737+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5062656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:43.328849+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5062656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938239 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:44.328974+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5054464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:45.329126+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5054464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:46.329221+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5046272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:47.329315+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5046272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:48.329424+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5046272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938239 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:49.329537+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 5038080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:50.329657+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 5038080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:51.329795+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5029888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:52.329952+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5029888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:53.330087+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 5021696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938239 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:54.330214+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 5021696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:55.330355+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 5013504 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:56.330490+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.999441147s of 16.002183914s, submitted: 2
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:57.330583+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:58.330680+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5005312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:38:59.330782+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4997120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:00.330933+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4988928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:01.331050+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4988928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:02.331180+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4988928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:03.331304+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 4980736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:04.331439+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 4972544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:05.331560+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 4972544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:06.331685+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 4964352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:07.331837+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 4964352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:08.331928+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 4956160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:09.332017+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 4956160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:10.332106+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:11.332192+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:12.332300+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 4947968 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:13.332402+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 4939776 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:14.332533+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c44000 session 0x564fb5afef00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:15.332660+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 4931584 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:16.333472+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 4923392 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:17.333596+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:18.333717+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4915200 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:19.333818+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4907008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:20.333948+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4907008 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:21.334049+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4898816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:22.334169+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4898816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:23.334289+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4898816 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:24.334424+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 4890624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938107 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.253587723s of 28.255268097s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:25.334532+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 4890624 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:26.334639+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:27.334748+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:28.334847+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4882432 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:29.334981+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4874240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939751 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:30.335089+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4874240 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:31.335181+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 4866048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:32.335363+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 4866048 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:33.335584+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 4857856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:34.335690+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 4857856 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939160 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:35.335824+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 4849664 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:36.335919+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4841472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:37.336074+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4841472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:38.336165+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4841472 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:39.336267+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 4833280 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939160 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:40.336361+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 4825088 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.733262062s of 15.736958504s, submitted: 3
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:41.336527+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4808704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:42.336639+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4808704 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:43.336739+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:44.336849+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:45.337008+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4800512 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:46.337110+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 4792320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:47.337243+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 4792320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:48.337364+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 4792320 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:49.337469+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 4784128 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:50.337588+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4775936 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:51.337679+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:52.337945+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:53.338047+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4767744 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:54.338148+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 4759552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:55.338245+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 4759552 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:56.338341+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4751360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:57.338441+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4751360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:58.338546+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4751360 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:39:59.338654+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 4743168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:00.338750+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 4743168 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:01.338858+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 4726784 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:02.338941+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 4726784 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:03.339042+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 4726784 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a400 session 0x564fb6126d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:04.339149+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 4718592 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:05.339252+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 4718592 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:06.339354+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:07.339480+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:08.339591+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:09.339703+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939028 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:10.339822+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:11.339925+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4710400 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:12.340034+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:13.340143+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4702208 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4219c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.208274841s of 33.209514618s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:14.340244+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4694016 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939160 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:15.340339+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 4677632 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:16.340439+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 4669440 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:17.340551+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 4669440 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:18.340688+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4661248 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:19.340834+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4661248 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939160 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4fc9c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:20.340943+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4661248 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:21.341079+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 4653056 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:22.341235+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 4653056 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:23.341341+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4644864 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:24.341410+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4644864 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940081 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:25.341507+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4644864 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:26.341604+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 4636672 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:27.341716+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 4628480 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:28.341817+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 4628480 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:29.342246+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4620288 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940081 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:30.342404+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4620288 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.985671997s of 16.989835739s, submitted: 3
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:31.342507+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4620288 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:32.342632+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4612096 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:33.342720+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4612096 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:34.342808+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4612096 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:35.342916+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 4603904 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:36.343005+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 4603904 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:37.343098+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4595712 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:38.343187+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4595712 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:39.343286+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4595712 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:40.343379+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4587520 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:41.343527+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4587520 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:42.343693+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4587520 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:43.343823+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4579328 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:44.343935+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4579328 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:45.344096+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4571136 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:46.344226+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4562944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:47.344352+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4562944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:48.344482+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4554752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb4219c00 session 0x564fb3f943c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb4fc9c00 session 0x564fb3cd2960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:49.344583+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4554752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:50.344706+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4538368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:51.344821+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4538368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:52.344952+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4538368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:53.345050+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4530176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:54.345157+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4530176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:55.345273+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:56.345371+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:57.345466+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4521984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:58.345592+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4513792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:40:59.345715+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4513792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939949 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.493139267s of 28.494064331s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:00.345819+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4513792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:01.345937+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 4505600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:02.346062+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 4505600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:03.346157+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:04.346272+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943105 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:05.346406+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4489216 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:06.346526+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 4481024 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:07.346640+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 4481024 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:08.346732+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 4472832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:09.346835+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 4472832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943105 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:10.346940+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 4472832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:11.347043+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4456448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.036651611s of 12.040016174s, submitted: 3
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:12.347164+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4456448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:13.347259+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 4448256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:14.347419+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 4440064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:15.347515+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 4423680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:16.347616+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:17.347712+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4415488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:18.347860+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:19.347969+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:20.348059+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 4407296 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:21.348164+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 4399104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3b7b400 session 0x564fb3d7e960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a800 session 0x564fb6550d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:22.348292+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 4399104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:23.348376+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4390912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:24.348489+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4390912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:25.348594+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:26.348694+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:27.348799+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:28.348931+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4382720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:29.349092+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 4374528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:30.349191+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 4374528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:31.349299+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 4358144 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:32.349551+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 4358144 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:33.349659+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 4349952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:34.349770+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 4349952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942382 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:35.349861+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:36.349928+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:37.350082+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4341760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:38.350224+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 4333568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.160346985s of 27.163261414s, submitted: 2
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:39.350361+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 4333568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942514 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:40.350474+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 4325376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:41.350578+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 4325376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:42.350716+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 4325376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:43.350868+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 4317184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:44.350936+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 4317184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944026 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4219c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:45.351038+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4300800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:46.351154+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4300800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:47.351265+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:48.351368+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501b800 session 0x564fb3d7e780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c45c00 session 0x564fb3d7e1e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:49.351481+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944026 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.999965668s of 11.002209663s, submitted: 2
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:50.351614+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:51.351746+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:52.351879+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:53.351997+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4300800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:54.352098+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4300800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943303 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:55.352215+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4292608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:56.352307+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4284416 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:57.352403+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 4276224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8035 writes, 32K keys, 8035 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 8035 writes, 1713 syncs, 4.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8035 writes, 32K keys, 8035 commit groups, 1.0 writes per commit group, ingest: 20.90 MB, 0.03 MB/s
                                           Interval WAL: 8035 writes, 1713 syncs, 4.69 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:58.352502+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 4218880 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:41:59.352640+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943303 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:00.352757+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:01.352851+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4202496 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:02.352925+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4194304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.933315277s of 12.937394142s, submitted: 3
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:03.353034+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 4194304 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:04.353134+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944947 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:05.353228+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:06.353322+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 4186112 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:07.353415+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 4177920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:08.353558+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 4177920 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:09.353676+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4169728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945868 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:10.353815+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4169728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:11.353920+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4169728 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:12.354045+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4161536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:13.354139+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4161536 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:14.354303+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 4153344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945868 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.422577858s of 12.426486015s, submitted: 3
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:15.354424+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 4153344 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:16.354576+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 4145152 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:17.354663+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:18.354759+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4136960 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:19.354871+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 4120576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:20.354999+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 4120576 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:21.355150+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:22.355309+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4112384 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:23.355447+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4104192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:24.355592+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4104192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:25.355717+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4104192 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:26.355868+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 4096000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:27.355940+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 4096000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:28.356056+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 4087808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:29.356157+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4079616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:30.356269+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4079616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:31.356364+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:32.356466+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4071424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:33.356599+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:34.356743+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:35.356835+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4063232 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:36.356952+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 4055040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:37.357066+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 4055040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:38.357239+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:39.357356+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4046848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:40.357491+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 4038656 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:41.357600+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 4030464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:42.357934+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 4030464 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:43.358031+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 4022272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:44.358129+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 4022272 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:45.358236+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 4014080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:46.358349+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 4014080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:47.358456+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4005888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:48.358640+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4005888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:49.358764+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4005888 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:50.358887+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 3997696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:51.358964+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 3997696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:52.359132+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 3989504 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:53.359242+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 3989504 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:54.359342+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 3981312 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:55.359431+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:56.359520+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:57.359929+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 3973120 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:58.360038+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:42:59.360128+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 3964928 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:00.360222+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 3956736 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:01.360322+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 3948544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:02.360445+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 3948544 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:03.360560+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 3940352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:04.360675+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 3940352 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:05.360786+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:06.360885+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:07.361020+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca7f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 3932160 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 52.612483978s of 52.613750458s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:08.361114+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:09.361231+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:10.361329+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:11.361434+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:12.361594+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:13.361718+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3538944 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:14.361814+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3530752 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:15.361917+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:16.362017+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:17.362126+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:18.362215+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:19.362328+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:20.362467+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:21.362612+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:22.362753+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:23.362877+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:24.362925+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:25.363024+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:26.363131+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:27.363222+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:28.363363+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:29.363490+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:30.363606+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:31.363714+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:32.363827+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:33.363938+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:34.364038+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:35.364136+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:36.364231+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:37.364338+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3522560 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:38.364436+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:39.364537+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:40.364965+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3514368 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:41.365054+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:42.365203+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:43.365325+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3506176 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:44.365448+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:45.365593+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3497984 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:46.365688+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3489792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:47.365847+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3489792 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:48.365925+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:49.366012+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3481600 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:50.366104+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 3473408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:51.366203+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 3473408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:52.366314+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 3473408 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:53.366413+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3448832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:54.366511+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3448832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:55.366683+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3448832 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:56.366777+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3440640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:57.366886+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3440640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:58.367007+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3440640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:43:59.367143+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3440640 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:00.367311+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:01.367412+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:02.367526+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:03.367632+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:04.367725+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:05.367886+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:06.368018+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:07.368118+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:08.368221+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:09.368322+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:10.368430+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:11.368526+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:12.368645+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:13.368734+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:14.368827+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:15.368934+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:16.369028+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:17.369138+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:18.369252+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:19.369377+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:20.369468+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:21.369564+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3432448 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:22.369671+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:23.369842+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:24.369945+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:25.370043+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:26.370147+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:27.370260+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3424256 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:28.370363+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:29.370475+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:30.370590+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:31.370684+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:32.370818+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3416064 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:33.370941+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:34.371068+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:35.371172+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:36.371278+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb4219c00 session 0x564fb3cf8780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c44000 session 0x564fb6550000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:37.371374+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:38.371476+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:39.371826+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:40.371934+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:41.372029+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:42.372150+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:43.372259+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:44.372351+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945736 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:45.372454+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 3407872 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:46.372565+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 98.840316772s of 99.029678345s, submitted: 354
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 3399680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:47.372699+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 3399680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:48.372799+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 3399680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:49.372929+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 3399680 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945868 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:50.373034+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 3391488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:51.373146+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 3391488 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:52.373274+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:53.373379+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:54.373508+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947380 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:55.373631+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:56.373745+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:57.373865+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:58.373926+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:44:59.374023+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946198 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:00.374128+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:01.374220+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:02.374393+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:03.374498+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.987428665s of 16.990921021s, submitted: 4
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:04.374640+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:05.374785+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:06.374900+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:07.375001+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:08.375113+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:09.375215+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:10.375315+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:11.375417+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:12.375528+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:13.375633+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3375104 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:14.375756+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3366912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:15.375868+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3366912 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:16.375967+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:17.376063+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:18.376165+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:19.376270+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:20.376391+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:21.376490+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:22.376631+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:23.376722+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:24.376812+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:25.376923+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:26.377012+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:27.377138+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:28.377247+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:29.377358+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:30.377491+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:31.377577+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:32.377676+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:33.377775+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:34.377875+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:35.377932+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3358720 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:36.378019+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:37.378123+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:38.378290+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:39.378405+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:40.378503+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:41.378592+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:42.378697+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:43.378820+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:44.379149+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:45.379247+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:46.379339+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:47.379443+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:48.379546+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:49.379688+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:50.379790+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:51.379936+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:52.380070+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:53.380188+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:54.380303+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:55.380414+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:56.380513+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:57.380611+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:58.380712+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:45:59.380812+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:00.380960+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:01.381046+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:02.381147+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:03.381241+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:04.381342+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:05.382877+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:06.382946+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a800 session 0x564fb31a9c20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c45c00 session 0x564fb31a90e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:07.383061+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:08.383166+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:09.383288+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3350528 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:10.383403+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3342336 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:11.384005+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3342336 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:12.384165+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3342336 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:13.384272+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3342336 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:14.384384+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946066 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:15.384512+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:16.384654+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 73.012329102s of 73.013580322s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:17.384826+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:18.384935+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:19.385045+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946198 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:20.385143+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:21.385346+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:22.385466+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:23.385573+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:24.385680+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946198 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:25.385784+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:26.385960+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:27.386123+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb322d400 session 0x564fb31a85a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501b400 session 0x564fb31a9860
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:28.386226+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.043398857s of 12.045754433s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3325952 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:29.386330+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:30.386430+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:31.386521+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:32.386626+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:33.386748+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:34.386858+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:35.386924+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945475 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:36.387014+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:37.387108+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:38.387272+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:39.387377+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:40.387480+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:41.387574+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3317760 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:42.387703+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:43.387796+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:44.387950+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:45.388099+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:46.388191+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:47.388290+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:48.388393+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:49.388487+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:50.388582+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3309568 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:51.388677+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:52.388791+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403c400 session 0x564fb32d7e00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501b800 session 0x564fb3d7e000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:53.388884+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:54.388992+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.741083145s of 25.744991302s, submitted: 3
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:55.389097+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945475 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:56.389188+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:57.389281+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:58.389407+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:46:59.389568+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:00.389723+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945475 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:01.389828+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3301376 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:02.390012+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 3293184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:03.390114+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 3293184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:04.390223+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 3293184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:05.390390+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 3293184 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945607 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.234606743s of 11.236687660s, submitted: 2
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:06.390491+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:07.390612+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:08.390707+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:09.390821+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:10.390930+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948631 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:11.391037+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:12.391166+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:13.391270+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:14.391365+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:15.391455+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948631 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:16.391549+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.965599060s of 10.967306137s, submitted: 2
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:17.391647+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:18.391708+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:19.391814+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:20.391924+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948499 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403c400 session 0x564fb538ed20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:21.392013+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:22.392118+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:23.392243+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:24.392359+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:25.392464+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948499 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:26.392560+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:27.392653+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 3276800 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:28.392729+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3268608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:29.392862+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3268608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:30.393005+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3268608 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948499 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.246407509s of 14.247385979s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:31.393101+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:32.393242+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:33.393351+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:34.393456+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:35.393557+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950143 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:36.393669+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:37.393771+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:38.393976+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:39.394095+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:40.394189+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948961 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:41.394279+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:42.394395+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:43.394495+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:44.394600+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:45.394717+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948961 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:46.394806+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:47.394917+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3252224 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.738798141s of 16.743886948s, submitted: 4
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:48.395076+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3244032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:49.395207+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3244032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:50.395328+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3244032 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:51.395462+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3235840 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:52.395594+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:53.395703+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:54.395813+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:55.395920+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:56.396045+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:57.396137+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:58.396234+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:47:59.396333+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:00.396427+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:01.396581+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:02.396726+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:03.396863+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:04.397002+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:05.397136+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:06.397300+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:07.397469+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:08.397635+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:09.397769+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:10.397951+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:11.398075+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:12.398236+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403d800 session 0x564fb3d80960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a400 session 0x564fb5afeb40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:13.398350+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:14.398485+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:15.398611+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:16.398710+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:17.398801+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:18.398921+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:19.399027+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:20.399148+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948829 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:21.399261+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:22.399395+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:23.399731+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.605865479s of 35.606979370s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:24.399964+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:25.400107+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948961 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:26.400298+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:27.400461+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:28.400556+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3227648 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:29.400665+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:30.400793+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948961 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:31.400887+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:32.401019+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:33.401136+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:34.401244+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:35.401461+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948370 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:36.401617+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:37.401752+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:38.401868+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:39.402013+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.297077179s of 16.299779892s, submitted: 2
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:40.402158+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:41.402282+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:42.402441+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:43.402579+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:44.402696+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:45.402839+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:46.402943+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:47.403108+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:48.403245+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:49.403381+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:50.403478+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:51.403580+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3219456 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:52.403720+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:53.403841+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:54.403993+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:55.404138+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:56.404267+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:57.404408+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:58.404538+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:48:59.404675+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:00.404847+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:01.404932+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:02.405034+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:03.405126+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:04.405231+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:05.405329+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:06.405437+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:07.405563+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:08.405696+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:09.405787+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:10.405942+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:11.406106+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:12.406276+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:13.406429+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:14.406553+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3211264 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:15.406683+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3203072 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: mgrc ms_handle_reset ms_handle_reset con 0x564fb403cc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/92811439
Nov 25 10:11:17 compute-0 ceph-osd[82261]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/92811439,v1:192.168.122.100:6801/92811439]
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: get_auth_request con 0x564fb3c44000 auth_method 0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: mgrc handle_mgr_configure stats_period=5
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:16.406836+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:17.407068+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:18.407227+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:19.407396+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:20.407563+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:21.407726+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:22.407868+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb3c45c00 session 0x564fb60d03c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:23.408049+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:24.408203+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb322d400 session 0x564fb32d92c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403dc00 session 0x564fb4f930e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:25.408363+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:26.408526+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:27.408653+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:28.408773+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:29.408929+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:30.409104+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948238 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:31.409200+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:32.409350+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.340190887s of 53.341789246s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:33.409479+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:34.409605+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:35.409719+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948502 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:36.409857+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:37.409992+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:38.410135+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb501a800 session 0x564fb6443a40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:39.410241+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:40.410381+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950014 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:41.410515+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:42.410688+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:43.410887+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:44.411068+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:45.411229+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950014 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:46.411392+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:47.411523+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:48.411673+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.570524216s of 15.574358940s, submitted: 3
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:49.411805+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:50.411936+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950014 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:51.412063+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:52.412227+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:53.412338+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:54.412481+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:55.412659+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 3072000 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951394 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:56.412785+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:57.412943+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:58.413109+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.126813889s of 10.132299423s, submitted: 4
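
The "_kv_sync_thread utilization" line is BlueStore reporting how much of the last measurement window its key-value sync thread spent idle versus committing: 10.1268 s idle out of a 10.1323 s window with 4 transactions submitted, i.e. busy only ~0.05% of the time, which matches an OSD seeing almost no client I/O. The arithmetic, spelled out:

# Busy fraction from the utilization line above.
idle, window, submitted = 10.126813889, 10.132299423, 4
print(f"busy: {1 - idle / window:.4%} across {submitted} submitted txns")  # ~0.0541%
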
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:49:59.413256+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:00.413391+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950803 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:01.413507+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:02.413680+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:03.413823+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:04.413953+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 3063808 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:05.414122+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:06.414247+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:07.414355+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:08.414453+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:09.414564+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:10.414679+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:11.414820+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:12.415007+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:13.415159+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:14.415330+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:15.415490+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:16.415620+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:17.415731+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:18.415862+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:19.415967+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:20.416067+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:21.416177+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 ms_handle_reset con 0x564fb403d800 session 0x564fb54bc5a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:22.416286+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:23.416383+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:24.416503+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:25.416620+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 3055616 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:26.416751+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:27.416886+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:28.417029+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:29.417173+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:30.417310+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950671 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:31.417428+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:32.417606+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.853954315s of 33.855854034s, submitted: 2
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:33.417783+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:34.417878+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 3047424 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:35.418004+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952315 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:36.418132+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:37.418253+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:38.418413+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:39.418529+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:40.418664+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953827 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:41.418807+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:42.418942+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:43.419053+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:44.419198+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:45.419356+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953236 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:46.419489+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:47.419645+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:48.419794+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:49.420319+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.244781494s of 17.250627518s, submitted: 4
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:50.420496+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:51.420606+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:52.420785+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:53.420942+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:54.421084+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:55.421201+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:56.421362+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:57.421460+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:58.421574+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:50:59.421725+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:00.421873+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:01.422029+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:02.422191+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:03.422327+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:04.422465+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:05.422602+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:06.422742+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:07.422886+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:08.423026+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:09.423143+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:10.423250+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:11.423343+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:12.423465+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3031040 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:13.423591+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:14.423785+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:15.423946+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953104 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:16.424082+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:17.424252+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:18.424342+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0xf7b60/0x19d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:19.424445+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.797666550s of 29.798833847s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3022848 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 137 ms_handle_reset con 0x564fb403d800 session 0x564fb55dd680
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 137 ms_handle_reset con 0x564fb322d400 session 0x564fb61274a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:20.424546+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 2990080 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961785 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:21.424642+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 139 ms_handle_reset con 0x564fb403dc00 session 0x564fb3ccfc20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fc667000/0x0/0x4ffc00000, data 0xfbdaf/0x1a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 2973696 heap: 89088000 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:22.424757+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fc667000/0x0/0x4ffc00000, data 0xfbdaf/0x1a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 140 ms_handle_reset con 0x564fb501a800 session 0x564fb55ade00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x8fdeda/0x9a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
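
From the first handle_osd_map entry above, the OSD catches up from epoch 136 to 141 in quick succession; each message names the epoch range received, the epoch it already has, and the sender's full range, while the interleaved ms_handle_reset, handle_auth_request and _renew_subs lines appear to be monitor sessions being reset and re-established as the new maps land. The store_statfs figures in the surrounding heartbeats also change (data stored grows from 0xf7b60 to 0x8fdeda), so real writes accompany the map churn. An illustrative extraction of the epoch progression from journal lines like these:

# Illustrative: pull the osdmap catch-up out of the handle_osd_map lines above.
import re

EPOCH = re.compile(r"osd\.1 (\d+) handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")
sample = [
    "osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]",
    "osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]",
    "osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]",
    "osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]",
    "osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]",
]
for entry in sample:
    cur, first, last, have = map(int, EPOCH.search(entry).groups())
    print(f"at epoch {cur}: received maps [{first},{last}], previously had {have}")
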
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 18644992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:23.424954+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 18628608 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:24.425110+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:25.425246+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027532 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:26.425348+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:27.425463+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:28.425581+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:29.425690+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:30.425814+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 ms_handle_reset con 0x564fb501a400 session 0x564fb345fa40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 ms_handle_reset con 0x564fb403c400 session 0x564fb60d10e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.923460960s of 10.963563919s, submitted: 66
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027664 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:31.425945+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:32.426057+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:33.426192+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:34.426251+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:35.426378+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027664 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:36.426481+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:37.426607+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:38.426712+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:39.426816+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 18612224 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:40.426975+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5a000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.443756104s of 10.445528984s, submitted: 1
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024008 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:41.427143+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:42.427326+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:43.427463+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:44.427577+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 ms_handle_reset con 0x564fb3c45c00 session 0x564fb663c3c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 ms_handle_reset con 0x564fb501b800 session 0x564fb60d12c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:45.427707+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024008 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:46.427814+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88317952 unmapped: 17555456 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:47.427976+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:48.428115+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:49.428219+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:50.428312+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024797 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:51.428408+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:52.428553+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:53.428677+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:54.428781+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:55.428908+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.355128288s of 14.359399796s, submitted: 4
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024929 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:56.428994+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 18604032 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:57.429099+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 18595840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 8934 writes, 34K keys, 8934 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8934 writes, 2144 syncs, 4.17 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 899 writes, 1569 keys, 899 commit groups, 1.0 writes per commit group, ingest: 0.68 MB, 0.00 MB/s
                                           Interval WAL: 899 writes, 431 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb19209b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564fb1921350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:58.429238+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 18563072 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:51:59.430004+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbe5e000/0x0/0x4ffc00000, data 0x901fb4/0x9ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 18563072 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:00.430150+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 18554880 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028563 data_alloc: 218103808 data_used: 163840
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:01.430254+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _renew_subs
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 143 ms_handle_reset con 0x564fb3c45800 session 0x564fb5e84000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 17465344 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:02.430374+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 17465344 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:03.430485+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 17465344 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:04.430649+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 143 ms_handle_reset con 0x564fb6009400 session 0x564fb5e843c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 17457152 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb827000/0x0/0x4ffc00000, data 0xf351e0/0xfe3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:05.430753+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 90062848 unmapped: 15810560 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094708 data_alloc: 218103808 data_used: 1814528
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:06.430848+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 11427840 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:07.430934+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.036621094s of 12.066541672s, submitted: 33
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 11476992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:08.431032+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94396416 unmapped: 11476992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:09.431139+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb825000/0x0/0x4ffc00000, data 0xf371b2/0xfe6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:10.431236+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129367 data_alloc: 218103808 data_used: 6565888
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:11.431346+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:12.431492+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:13.431587+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 11567104 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:14.431684+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb825000/0x0/0x4ffc00000, data 0xf371b2/0xfe6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 94322688 unmapped: 11550720 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:15.431791+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102203392 unmapped: 3670016 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188577 data_alloc: 218103808 data_used: 7290880
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:16.431907+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb825000/0x0/0x4ffc00000, data 0xf371b2/0xfe6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104456192 unmapped: 1417216 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:17.431999+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104456192 unmapped: 1417216 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:18.432114+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104456192 unmapped: 1417216 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:19.432777+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 1384448 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:20.432925+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 1253376 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200525 data_alloc: 218103808 data_used: 7475200
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:21.433025+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 1253376 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:22.433216+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 1253376 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:23.433880+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 1253376 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:24.434006+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:25.434120+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201133 data_alloc: 218103808 data_used: 7536640
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:26.434234+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:27.434355+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:28.434449+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:29.434552+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:30.434706+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201133 data_alloc: 218103808 data_used: 7536640
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:31.434822+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:32.434938+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ef2000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:33.435044+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb56552c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403c400 session 0x564fb3d7f680
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b800 session 0x564fb32d81e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:34.435191+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb5e84b40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1236992 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.712247849s of 27.780412674s, submitted: 108
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:35.435280+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009800 session 0x564fb64592c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb54bc960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403c400 session 0x564fb4f94d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b800 session 0x564fb3cd2d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb32d4780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275793 data_alloc: 218103808 data_used: 7536640
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:36.435408+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93a0000/0x0/0x4ffc00000, data 0x221c1c2/0x22cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:37.435508+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93a0000/0x0/0x4ffc00000, data 0x221c1c2/0x22cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:38.435625+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:39.435733+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:40.435854+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6008400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 18677760 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275793 data_alloc: 218103808 data_used: 7536640
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:41.435956+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 12337152 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:42.436068+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93a0000/0x0/0x4ffc00000, data 0x221c1c2/0x22cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:43.436172+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:44.436317+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:45.436496+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355745 data_alloc: 234881024 data_used: 19369984
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:46.436656+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.959013939s of 11.979373932s, submitted: 12
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:47.437015+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93a0000/0x0/0x4ffc00000, data 0x221c1c2/0x22cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:48.437192+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:49.437341+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 7585792 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:50.437487+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 7520256 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356017 data_alloc: 234881024 data_used: 19394560
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:51.437585+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a47000/0x0/0x4ffc00000, data 0x2b671c2/0x2c17000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 2236416 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:52.438633+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f89ff000/0x0/0x4ffc00000, data 0x2b961c2/0x2c46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 2023424 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:53.438722+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 2023424 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:54.438852+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f89ff000/0x0/0x4ffc00000, data 0x2b961c2/0x2c46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 1990656 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:55.438972+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 1990656 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440081 data_alloc: 234881024 data_used: 20099072
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:56.439099+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 1990656 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.975911140s of 10.039819717s, submitted: 109
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:57.439230+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:58.439362+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a23000/0x0/0x4ffc00000, data 0x2b991c2/0x2c49000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:52:59.439495+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:00.439615+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430841 data_alloc: 234881024 data_used: 20099072
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:01.439710+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a23000/0x0/0x4ffc00000, data 0x2b991c2/0x2c49000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6008400 session 0x564fb5654780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 4210688 heap: 121110528 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:02.439870+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a23000/0x0/0x4ffc00000, data 0x2b991c2/0x2c49000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb54bcd20
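
handle_auth_request / ms_handle_reset pairs like the ones above are short-lived peer connections being authenticated and then torn down; at this debug level they are generally routine. A throwaway sketch (the log path is hypothetical) for counting resets per connection when triaging whether a particular peer is flapping:

    import re
    from collections import Counter

    resets = Counter()
    with open("compute-0-ceph-osd.log") as fh:   # hypothetical journal export
        for entry in fh:
            m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", entry)
            if m:
                resets[m[1]] += 1

    for con, n in resets.most_common(5):
        print(f"{con}  {n} resets")
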
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:03.439977+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:04.440081+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:05.440196+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201687 data_alloc: 218103808 data_used: 7536640
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:06.440322+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 13402112 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:07.440487+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb6668f00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.509199142s of 10.522704124s, submitted: 30
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009400 session 0x564fb55eab40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403c400 session 0x564fb5e84f00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103309312 unmapped: 18849792 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:08.440590+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f0f000/0x0/0x4ffc00000, data 0x16ae1b2/0x175d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:09.440706+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:10.440831+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052351 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:11.440994+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:12.441126+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:13.441271+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:14.441391+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:15.441560+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052351 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:16.441720+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:17.441857+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:18.441985+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:19.442122+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:20.442248+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb3d7e000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb341de00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:21.442342+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052351 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:22.442456+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:23.442859+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 19447808 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:24.443026+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:25.443150+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:26.443310+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052351 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:27.443460+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb5e854a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb64434a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb5e85c20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:28.443541+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 19439616 heap: 122159104 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403c400 session 0x564fb5e85e00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.717388153s of 20.917802811s, submitted: 378
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb30af4a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb30ae1e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb51423c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb5143860
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009400 session 0x564fb341c780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:29.443709+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 20619264 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:30.443816+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 20619264 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:31.443986+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 20619264 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066393 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:32.444135+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 20611072 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab9a000/0x0/0x4ffc00000, data 0xa221c2/0xad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:33.444266+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 20611072 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:34.444408+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 20611072 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:35.444572+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 20611072 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:36.444751+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067505 data_alloc: 218103808 data_used: 200704
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:37.444912+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab9a000/0x0/0x4ffc00000, data 0xa221c2/0xad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:38.445058+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:39.445212+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.470705032s of 11.479301453s, submitted: 10
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:40.445345+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:41.445475+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 20324352 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066914 data_alloc: 218103808 data_used: 200704
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab9a000/0x0/0x4ffc00000, data 0xa221c2/0xad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:42.445653+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 20316160 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:43.445778+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 20316160 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab9a000/0x0/0x4ffc00000, data 0xa221c2/0xad2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:44.445935+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 20316160 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:45.446071+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 18931712 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:46.446191+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:47.446388+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:48.446519+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:49.446615+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:50.446760+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:51.446869+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:52.447044+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:53.447214+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:54.447338+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:55.447481+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:56.447585+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a400 session 0x564fb65510e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb663dc20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:57.447729+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 19505152 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:58.447935+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:53:59.448066+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:00.448162+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:01.448287+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:02.448466+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:03.448599+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:04.448731+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 19496960 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:05.448861+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 19488768 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:06.448992+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 19488768 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098920 data_alloc: 218103808 data_used: 331776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa803000/0x0/0x4ffc00000, data 0xdaa1c2/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.047389984s of 27.079875946s, submitted: 43
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:07.449106+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 19488768 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb3cd3c20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb538ed20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:08.449256+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 20299776 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:09.449396+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 20299776 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:10.449505+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 20283392 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:11.449659+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 20283392 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058782 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:12.449816+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 20283392 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:13.449922+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 20283392 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:14.450062+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:15.450256+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:16.450366+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059703 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:17.450518+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:18.450640+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:19.450785+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 20275200 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:20.450942+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.764374733s of 13.775873184s, submitted: 10
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:21.451075+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059571 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:22.451238+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:23.451372+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:24.451485+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:25.451590+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:26.451685+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102940672 unmapped: 20267008 heap: 123207680 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059571 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6008400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6008400 session 0x564fb64581e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb5aff2c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb3e174a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4facb5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb4fe9c20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a400 session 0x564fb51674a0
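In the burst above, every handle_auth_request challenge is immediately followed by an ms_handle_reset on the same connection pointer. A tiny sketch that pairs the two event types by pointer, the way one might when grepping a longer capture of this log:

from collections import defaultdict

# (event, con) pairs transcribed from the five couples above.
events = [("challenge", "0x564fb6008400"), ("reset", "0x564fb6008400"),
          ("challenge", "0x564fb3c45800"), ("reset", "0x564fb3c45800"),
          ("challenge", "0x564fb403d800"), ("reset", "0x564fb403d800"),
          ("challenge", "0x564fb403dc00"), ("reset", "0x564fb403dc00"),
          ("challenge", "0x564fb501a400"), ("reset", "0x564fb501a400")]
seen = defaultdict(list)
for what, con in events:
    seen[con].append(what)
for con, what in seen.items():
    print(con, "->", what)

Every connection shows ['challenge', 'reset'] and nothing else, consistent with short-lived incoming connections being authenticated and then torn down rather than with any failure.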
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:27.451789+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:28.451935+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:29.452092+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:30.452272+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:31.452434+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075612 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:32.452617+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:33.452735+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.655667305s of 12.667624474s, submitted: 12
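The _kv_sync_thread utilization line above is the clearest load signal in this stretch: over a 12.67 s window the kv sync thread was idle for all but a few milliseconds while committing 12 transactions. Recomputing from the printed figures:

# Figures copied from the utilization line above.
idle, window, submitted = 12.655667305, 12.667624474, 12
busy = window - idle
print(f"busy {busy*1000:.2f} ms ({busy/window:.3%} of the window), "
      f"{submitted/window:.2f} commits/s")

That is roughly 12 ms of work, under 0.1% of the window, at about one commit per second.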
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:34.452935+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:35.453093+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 24576000 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:36.453230+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090336 data_alloc: 218103808 data_used: 2330624
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:37.453346+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:38.453477+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:39.453609+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:40.453740+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:41.453848+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090336 data_alloc: 218103808 data_used: 2330624
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:42.454038+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 24567808 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:43.454196+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xb181b2/0xbc7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.119963646s of 10.137836456s, submitted: 20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:44.454319+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:45.454439+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:46.454609+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119484 data_alloc: 218103808 data_used: 2330624
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:47.454740+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:48.454929+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:49.455064+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:50.455164+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:51.455264+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119484 data_alloc: 218103808 data_used: 2330624
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:52.455384+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b800 session 0x564fb3cf9c20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb32d63c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:53.455523+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:54.455644+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:55.455766+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:56.455952+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119484 data_alloc: 218103808 data_used: 2330624
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:57.456120+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:58.456232+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:54:59.456356+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6eb000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:00.456480+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104398848 unmapped: 23142400 heap: 127541248 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb32d4d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb32d4780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb66e9e00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb66e8780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.249073029s of 17.250936508s, submitted: 2
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a400 session 0x564fb66e8960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a400 session 0x564fb6550f00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb6551a40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45c00 session 0x564fb6550b40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403d800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403d800 session 0x564fb6550d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:01.456614+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182469 data_alloc: 218103808 data_used: 2330624
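Across every tune_memory line in this section (e.g. two lines up), mapped + unmapped equals heap exactly, and the 4294967296-byte target is the 4 GiB budget the autotuner steers toward, presumably osd_memory_target. Checking the identity on those values:

# Values from the tune_memory line two lines above; the identity holds
# for every tune_memory line in this section.
target, mapped, unmapped, heap = 4294967296, 105504768, 33587200, 139091968
assert mapped + unmapped == heap
print(f"heap {heap/2**20:.1f} MiB = mapped {mapped/2**20:.1f} "
      f"+ unmapped {unmapped/2**20:.1f} MiB; "
      f"mapped is {mapped/target:.2%} of the {target/2**30:.0f} GiB target")

With the mapped heap two orders of magnitude below target, the tuner has no reason to shrink anything, which is presumably why old mem and new mem never move from 2845415832 anywhere in this section.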
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:02.456769+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:03.456882+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:04.457059+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:05.457243+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6635c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:06.457369+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183145 data_alloc: 218103808 data_used: 2330624
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:07.457480+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:08.457954+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb60efc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:09.458066+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 33587200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:10.458181+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:11.458295+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240145 data_alloc: 234881024 data_used: 10731520
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.196850777s of 11.221287727s, submitted: 28
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:12.458419+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:13.458556+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:14.458668+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:15.458767+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:16.458866+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238963 data_alloc: 234881024 data_used: 10731520
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb60efc00 session 0x564fb55ac780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb6550000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:17.458933+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:18.459042+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x16d4224/0x1785000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110010368 unmapped: 29081600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:19.459144+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9049000/0x0/0x4ffc00000, data 0x2162224/0x2213000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111943680 unmapped: 27148288 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:20.459257+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112197632 unmapped: 26894336 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:21.459426+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 26886144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338937 data_alloc: 234881024 data_used: 11501568
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9017000/0x0/0x4ffc00000, data 0x2194224/0x2245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:22.459551+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 26886144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:23.459683+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 26886144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:24.459818+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 26886144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.927748680s of 13.000759125s, submitted: 104
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:25.459913+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9017000/0x0/0x4ffc00000, data 0x2194224/0x2245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:26.460005+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334049 data_alloc: 234881024 data_used: 11505664
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662a000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:27.460108+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9014000/0x0/0x4ffc00000, data 0x2197224/0x2248000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111853568 unmapped: 27238400 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb4f95860
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:28.460221+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662c400 session 0x564fb5167680
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 34783232 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:29.460348+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 34783232 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:30.460494+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa2da000/0x0/0x4ffc00000, data 0xed21b2/0xf81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 34783232 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb3d7e5a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:31.460611+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8718000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8718000 session 0x564fb55ade00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079129 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:32.460765+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:33.460926+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:34.461048+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
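Tracking the 'data stored' field across the heartbeats in this section shows a few tens of MiB written over roughly a minute of internal time and then removed again, ending back at the starting value in the line above; this is consistent with a short-lived client workload rather than steady traffic. The values, transcribed in order of appearance:

# 'data stored' (hex, bytes) from the heartbeat lines in this section.
stored = [0x9081b2, 0xb181b2, 0xed21b2, 0x16d4224, 0x2162224, 0x2194224,
          0x2197224, 0xed21b2, 0x9081b2]
for a, b in zip(stored, stored[1:]):
    print(f"{a:10d} -> {b:10d}  ({(b - a)/2**20:+.2f} MiB)")

The deltas rise through roughly +2, +4, +8 and +10 MiB steps and then fall by about 21 and 4 MiB, returning to the initial 0x9081b2.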
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:35.461213+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:36.461344+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.006490707s of 11.051873207s, submitted: 52
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078538 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:37.461509+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:38.461681+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:39.461777+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:40.461928+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:41.462044+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078538 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:42.462187+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:43.462274+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:44.462440+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:45.462573+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:46.462705+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078406 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:47.462825+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 36634624 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:48.462971+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:49.463105+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:50.463231+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:51.463612+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078406 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:52.463724+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:53.463851+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:54.463984+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 36626432 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.814508438s of 18.817729950s, submitted: 2
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb6443a40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb4f94960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662c400 session 0x564fb5affc20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb55eb680
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:55.464071+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8718400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8718400 session 0x564fb341d680
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 36560896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:56.464227+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 36552704 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109100 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa591000/0x0/0x4ffc00000, data 0xc1b214/0xccb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:57.464376+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb5034d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 36552704 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb3cd01e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:58.464498+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662c400 session 0x564fb5143c20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 36519936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb4febe00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8719000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6008400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:55:59.464608+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 36519936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:00.464816+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 35700736 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa58f000/0x0/0x4ffc00000, data 0xc1b247/0xccd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:01.464920+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 35700736 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133315 data_alloc: 218103808 data_used: 3379200
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:02.465039+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 35700736 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:03.465180+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 35700736 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:04.465773+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa58f000/0x0/0x4ffc00000, data 0xc1b247/0xccd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:05.465923+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:06.466057+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133315 data_alloc: 218103808 data_used: 3379200
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa58f000/0x0/0x4ffc00000, data 0xc1b247/0xccd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:07.466191+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa58f000/0x0/0x4ffc00000, data 0xc1b247/0xccd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:08.466302+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 35692544 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.611018181s of 13.643602371s, submitted: 41
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:09.466426+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 30982144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c37000/0x0/0x4ffc00000, data 0x1573247/0x1625000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:10.466504+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c37000/0x0/0x4ffc00000, data 0x1573247/0x1625000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 30982144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:11.467090+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 30982144 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219309 data_alloc: 218103808 data_used: 4100096
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:12.467212+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 30965760 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:13.467362+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 30965760 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c37000/0x0/0x4ffc00000, data 0x1573247/0x1625000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:14.467506+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 30965760 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:15.467652+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 30965760 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:16.467792+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215741 data_alloc: 218103808 data_used: 4104192
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:17.467910+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:18.468001+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:19.468095+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c35000/0x0/0x4ffc00000, data 0x1575247/0x1627000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:20.468201+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31555584 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:21.468299+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c35000/0x0/0x4ffc00000, data 0x1575247/0x1627000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.835337639s of 12.891619682s, submitted: 84
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215965 data_alloc: 218103808 data_used: 4104192
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:22.468468+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:23.468587+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c34000/0x0/0x4ffc00000, data 0x1576247/0x1628000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c34000/0x0/0x4ffc00000, data 0x1576247/0x1628000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:24.468705+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:25.468862+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:26.469029+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215965 data_alloc: 218103808 data_used: 4104192
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:27.469195+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31547392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:28.469338+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c32000/0x0/0x4ffc00000, data 0x1578247/0x162a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 31539200 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:29.469460+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009800 session 0x564fb32d41e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:30.469564+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:31.469666+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9013000/0x0/0x4ffc00000, data 0x2197247/0x2249000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304295 data_alloc: 218103808 data_used: 4104192
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:32.469780+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6635c00 session 0x564fb4f94b40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb341cd20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:33.469960+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:34.470070+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:35.470175+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 31129600 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:36.470281+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 22093824 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389567 data_alloc: 234881024 data_used: 16691200
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:37.470372+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9013000/0x0/0x4ffc00000, data 0x2197247/0x2249000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 22085632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:38.470460+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 22085632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:39.470555+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9013000/0x0/0x4ffc00000, data 0x2197247/0x2249000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 22052864 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:40.470703+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.671489716s of 18.688638687s, submitted: 13
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 22011904 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:41.470857+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 22011904 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389871 data_alloc: 234881024 data_used: 16691200
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:42.470997+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9011000/0x0/0x4ffc00000, data 0x2198247/0x224a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 118161408 unmapped: 20930560 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:43.471124+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 21962752 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:44.471279+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 118243328 unmapped: 20848640 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:45.471444+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8b00000/0x0/0x4ffc00000, data 0x26aa247/0x275c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120676352 unmapped: 18415616 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:46.471578+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120676352 unmapped: 18415616 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435597 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:47.471728+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120676352 unmapped: 18415616 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:48.471881+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18407424 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8af0000/0x0/0x4ffc00000, data 0x26ba247/0x276c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:49.472010+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18407424 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:50.472138+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18407424 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:51.472225+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18407424 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435597 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:52.472329+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8af0000/0x0/0x4ffc00000, data 0x26ba247/0x276c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 18399232 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:53.472424+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.152483940s of 13.195999146s, submitted: 58
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 18374656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:54.472600+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 18374656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:55.472757+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:56.472927+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431246 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:57.473034+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aed000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aed000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:58.473126+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:56:59.473304+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119939072 unmapped: 19152896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:00.473437+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aed000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119963648 unmapped: 19128320 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:01.473553+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119971840 unmapped: 19120128 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430778 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aef000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:02.473648+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119971840 unmapped: 19120128 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:03.473736+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119971840 unmapped: 19120128 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:04.473862+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.138096809s of 11.145630836s, submitted: 6
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 19111936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aef000/0x0/0x4ffc00000, data 0x26bb247/0x276d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:05.473966+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 19111936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:06.474062+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 19111936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430970 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:07.474161+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 19111936 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:08.474255+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19103744 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:09.474326+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19103744 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:10.474458+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19103744 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:11.474597+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431122 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:12.474710+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:13.474841+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:14.475016+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:15.475180+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 19079168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.510084152s of 11.513150215s, submitted: 3
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:16.475423+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431122 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:17.475521+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:18.475637+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:19.475803+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:20.475972+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aee000/0x0/0x4ffc00000, data 0x26bc247/0x276e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:21.476073+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431290 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:22.476183+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:23.476323+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 19062784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:24.476461+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19054592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:25.476560+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19054592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:26.476683+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26bd247/0x276f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19054592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430978 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:27.476792+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 19054592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:28.476930+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.267045021s of 12.273044586s, submitted: 5
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 19030016 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:29.477098+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 19030016 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26bd247/0x276f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:30.477241+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26bd247/0x276f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 19021824 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:31.477353+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 19021824 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431146 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:32.477458+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:33.477582+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26bd247/0x276f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:34.477727+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:35.477881+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:36.478015+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431266 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:37.478127+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:38.478217+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:39.478379+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.515620232s of 11.520205498s, submitted: 3
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:40.478472+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:41.478576+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431282 data_alloc: 234881024 data_used: 17227776
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:42.478716+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:43.478818+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 19013632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:44.478976+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 19005440 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:45.479076+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 120111104 unmapped: 18980864 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:46.479175+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb30ae960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662c400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662c400 session 0x564fb32d72c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223518 data_alloc: 218103808 data_used: 4104192
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x26be247/0x2770000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:47.479275+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:48.479415+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:49.479967+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 27246592 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:50.480109+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.488348007s of 10.496168137s, submitted: 12
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 27942912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:51.480202+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x157e247/0x1630000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 27942912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223534 data_alloc: 218103808 data_used: 4104192
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:52.480329+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8719000 session 0x564fb64425a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6008400 session 0x564fb55eaf00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb55ad4a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c2c000/0x0/0x4ffc00000, data 0x157e247/0x1630000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:53.480474+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:54.480651+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:55.480808+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:56.480940+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098915 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:57.481040+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:58.481170+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:57:59.481272+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb3ccf4a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009800 session 0x564fb663c780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:00.481370+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:01.481455+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098915 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:02.481567+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662a000 session 0x564fb4207860
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:03.481689+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:04.481852+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:05.481966+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:06.482093+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098915 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:07.482255+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:08.482390+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:09.482567+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:10.482787+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:11.482973+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa66a000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 31285248 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098915 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:12.483197+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb322d400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.798551559s of 21.831386566s, submitted: 48
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb322d400 session 0x564fb51661e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6008400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6008400 session 0x564fb4fe94a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009800 session 0x564fb32d4d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6009c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6009c00 session 0x564fb30ae960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb3cf9a40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 31047680 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:13.483358+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:14.483514+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:15.483676+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:16.483840+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130144 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:17.484004+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:18.484144+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:19.484275+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 31064064 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:20.484395+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:21.484531+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151252 data_alloc: 218103808 data_used: 3313664
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:22.484684+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:23.484832+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:24.484982+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.434982300s of 12.449465752s, submitted: 17
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb3cd2960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 31137792 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8719000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8719000 session 0x564fb55ac780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:25.485148+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa491000/0x0/0x4ffc00000, data 0xd1c1b2/0xdcb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:26.485304+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101268 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:27.485420+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:28.485549+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:29.485709+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:30.485841+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:31.485984+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101004 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:32.486145+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb6550f00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb6550d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb32d4960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb32d41e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb32d4d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb3cf83c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb3cef0e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb8719000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb8719000 session 0x564fb538f4a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb538e1e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb66685a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb55dd680
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:33.486260+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:34.486396+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:35.486534+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 32710656 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa698000/0x0/0x4ffc00000, data 0xb141c2/0xbc4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:36.486656+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb538e780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb403dc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb403dc00 session 0x564fb42061e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 32784384 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120682 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:37.486762+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 32784384 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:38.486917+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb4207860
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.719092369s of 13.742123604s, submitted: 25
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb4206960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:39.487036+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb381f5/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:40.487145+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:41.487263+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141881 data_alloc: 218103808 data_used: 2322432
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:42.487390+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 32448512 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:43.487508+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb55eaf00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb55ac3c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb381f5/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb42074a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa672000/0x0/0x4ffc00000, data 0xb381f5/0xbea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:44.487616+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:45.487733+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:46.487841+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107044 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:47.487944+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:48.488041+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:49.488143+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:50.488243+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:51.488350+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107044 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:52.488489+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:53.488624+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:54.488758+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:55.488848+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:56.488940+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107044 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:57.489042+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:58.489133+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:58:59.489249+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:00.489352+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.812772751s of 21.829757690s, submitted: 23
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:01.489458+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106912 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:02.489564+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb538e1e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb538e780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb54bde00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 33284096 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb3cef0e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:03.489656+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb3cf9a40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb3cf8960
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb3ccfc20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb4fe83c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb3ccf4a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:04.489750+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0xda81c1/0xe58000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:05.489875+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:06.489928+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb51430e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146351 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:07.490016+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb51432c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0xda81c1/0xe58000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb5142f00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c45800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 33005568 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c45800 session 0x564fb5143c20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:08.490150+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 32989184 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:09.490242+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0xda81f4/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:10.490334+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:11.490432+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184203 data_alloc: 218103808 data_used: 4931584
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:12.490549+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:13.490711+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:14.491026+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:15.491235+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0xda81f4/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:16.491625+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184203 data_alloc: 218103808 data_used: 4931584
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:17.491721+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0xda81f4/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 32464896 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:18.491832+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.024320602s of 18.049039841s, submitted: 28
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 27426816 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:19.491938+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:20.492034+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:21.492192+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ed0000/0x0/0x4ffc00000, data 0x12d21f4/0x1384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235909 data_alloc: 218103808 data_used: 5124096
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:22.492310+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:23.492425+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:24.492554+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:25.492686+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ed0000/0x0/0x4ffc00000, data 0x12d21f4/0x1384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:26.492775+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231621 data_alloc: 218103808 data_used: 5124096
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:27.492883+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ed5000/0x0/0x4ffc00000, data 0x12d51f4/0x1387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:28.493028+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:29.493119+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.634371758s of 11.680276871s, submitted: 63
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:30.493270+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb5142d20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501b400 session 0x564fb5e841e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb66e8780
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:31.493437+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:32.493588+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:33.493727+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:34.493868+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:35.494005+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:36.494127+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:37.494252+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:38.494390+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:39.494843+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:40.495012+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:41.495152+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:42.495334+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:43.495754+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:44.495883+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:45.496034+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:46.496195+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:47.496355+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:48.496459+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread fragmentation_score=0.000274 took=0.000032s
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:49.496558+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:50.496653+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:51.496755+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:52.496887+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:53.497035+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:54.497201+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:55.497302+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:56.497411+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:57.497571+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:58.497664+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T09:59:59.497817+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:00.497977+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:01.498064+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:02.498195+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:03.498324+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:04.498454+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:05.498573+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:06.498742+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:07.498868+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 30629888 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119345 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.998508453s of 38.033313751s, submitted: 53
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb55acf00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:08.498943+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108748800 unmapped: 30343168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:09.499050+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 108748800 unmapped: 30343168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb6634400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c1000/0x0/0x4ffc00000, data 0xdec1b2/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:10.499186+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 28655616 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:11.499300+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 28557312 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e4f000/0x0/0x4ffc00000, data 0x135e1b2/0x140d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:12.499435+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 28557312 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201943 data_alloc: 218103808 data_used: 172032
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:13.499587+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:14.499748+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:15.499923+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:16.500064+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e4f000/0x0/0x4ffc00000, data 0x135e1b2/0x140d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:17.500164+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228351 data_alloc: 218103808 data_used: 4169728
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e4d000/0x0/0x4ffc00000, data 0x13601b2/0x140f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:18.500300+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:19.500444+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:20.500578+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e4d000/0x0/0x4ffc00000, data 0x13601b2/0x140f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:21.500706+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 28147712 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.297528267s of 13.345145226s, submitted: 56
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:22.500821+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265043 data_alloc: 218103808 data_used: 4227072
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:23.500947+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:24.501075+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:25.501200+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:26.501320+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b65000/0x0/0x4ffc00000, data 0x16391b2/0x16e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:27.501454+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 26140672 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265267 data_alloc: 218103808 data_used: 4227072
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:28.501583+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:29.501734+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:30.501874+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b72000/0x0/0x4ffc00000, data 0x163b1b2/0x16ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:31.502037+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b72000/0x0/0x4ffc00000, data 0x163b1b2/0x16ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:32.502150+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259899 data_alloc: 218103808 data_used: 4227072
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:33.502309+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.009808540s of 12.043086052s, submitted: 54
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:34.502486+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b71000/0x0/0x4ffc00000, data 0x163c1b2/0x16eb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:35.502661+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:36.502798+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b71000/0x0/0x4ffc00000, data 0x163c1b2/0x16eb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:37.502944+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260123 data_alloc: 218103808 data_used: 4227072
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:38.503090+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c44c00 session 0x564fb52dc1e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 26181632 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662cc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662cc00 session 0x564fb3e174a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b71000/0x0/0x4ffc00000, data 0x163c1b2/0x16eb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:39.503257+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 27271168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:40.503352+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 27271168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:41.503477+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 27271168 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb6634400 session 0x564fb54bda40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb3cd3860
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:42.503627+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130238 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:43.503763+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:44.503916+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:45.504050+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:46.504155+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:47.504281+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130238 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:48.504397+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:49.504498+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:50.504602+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:51.504728+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:52.504833+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130238 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:53.504936+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:54.505047+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:55.505140+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:56.505236+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:57.505358+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130238 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:58.505454+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:00:59.505569+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 27254784 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa334000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.766538620s of 26.777446747s, submitted: 18
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:00.505668+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c44c00 session 0x564fb55ea000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0xd9e1b2/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:01.505808+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:02.506012+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166848 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:03.506115+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a800 session 0x564fb5afeb40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:04.506217+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662cc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb662cc00 session 0x564fb32d7c20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:05.506330+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0xd9e1b2/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb64bc800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb64bc800 session 0x564fb54bd0e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb64bc800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb64bc800 session 0x564fb55ddc20
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:06.506463+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 27156480 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3b7b400
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:07.506561+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 27230208 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170648 data_alloc: 218103808 data_used: 696320
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:08.506676+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:09.506796+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0xd9e1b2/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:10.506925+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:11.507025+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:12.507191+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191776 data_alloc: 218103808 data_used: 3842048
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:13.507334+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:14.507461+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:15.507582+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 25894912 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0xd9e1b2/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:16.507717+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.093719482s of 16.104257584s, submitted: 10
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa391000/0x0/0x4ffc00000, data 0xe1c1b2/0xecb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [1,4])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 23453696 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:17.507839+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245506 data_alloc: 218103808 data_used: 4386816
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:18.508029+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:19.508162+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e22000/0x0/0x4ffc00000, data 0x138a1b2/0x1439000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:20.508321+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:21.508471+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:22.508627+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245506 data_alloc: 218103808 data_used: 4386816
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:23.508758+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e22000/0x0/0x4ffc00000, data 0x138a1b2/0x1439000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:24.508918+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:25.509031+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:26.509168+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:27.509272+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245506 data_alloc: 218103808 data_used: 4386816
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e22000/0x0/0x4ffc00000, data 0x138a1b2/0x1439000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:28.509380+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 24420352 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3b7b400 session 0x564fb55eb4a0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb3c44c00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.580277443s of 12.613999367s, submitted: 56
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb3c44c00 session 0x564fb5bc01e0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:29.509531+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:30.509661+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:31.509811+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:32.509946+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:33.510123+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:34.510257+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:35.510439+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:36.510619+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:37.510731+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:38.510887+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:39.511068+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 27844608 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:40.511171+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:41.511305+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:42.511541+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:43.511676+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:44.511816+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:45.511957+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:46.512157+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:47.512350+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:48.512505+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:49.512673+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:50.512840+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:51.512993+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:52.513184+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:53.513335+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:54.513521+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:55.513637+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 27836416 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:56.513787+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:57.513936+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3470 syncs, 3.43 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2964 writes, 9767 keys, 2964 commit groups, 1.0 writes per commit group, ingest: 10.40 MB, 0.02 MB/s
                                           Interval WAL: 2964 writes, 1326 syncs, 2.24 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:58.514055+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:01:59.514166+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:00.514276+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:01.514383+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:02.514559+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:03.514738+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:04.514881+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:05.515084+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:06.515229+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:07.515384+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:08.515538+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:09.515664+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:10.515782+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:11.515929+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 27828224 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:12.516088+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:13.516234+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:14.516408+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:15.516530+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:16.516639+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:17.516775+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:18.516920+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:19.517122+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:20.517333+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:21.517428+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:22.517543+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:23.517665+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:24.517762+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:25.517861+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 27820032 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:26.517941+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:27.518038+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:28.518141+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:29.518248+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:30.518353+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:31.518451+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 27811840 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:32.518576+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'config diff' '{prefix=config diff}'
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'config show' '{prefix=config show}'
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 27451392 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:33.518674+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 27762688 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:34.518780+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 27762688 heap: 139091968 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'log dump' '{prefix=log dump}'
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:35.518882+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 10:11:17 compute-0 ceph-osd[82261]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 39329792 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:36.519013+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 39321600 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:37.519313+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 39321600 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:38.519419+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 39321600 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:39.519522+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 39321600 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:40.519627+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 39321600 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:41.519736+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 39321600 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:42.519861+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 39321600 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:43.519961+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 39321600 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:44.520065+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:45.520200+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:46.520301+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:47.520403+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:48.520512+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:49.520610+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:50.520708+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:51.520821+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:52.520950+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:53.521055+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:54.521934+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:55.523126+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:56.523219+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:57.523317+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:58.523424+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:02:59.523558+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 39313408 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb4219800 session 0x564fb3cef2c0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb501a800
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:00.523655+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb4219400 session 0x564fb4206f00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb662cc00
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 39305216 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb4063000 session 0x564fb3ceeb40
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4219400
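Interleaved with the ticks above, three connections are torn down (ms_handle_reset, each naming a con and session pointer) and three incoming peers are issued fresh authentication challenges (handle_auth_request, presumably cephx). Note that 0x564fb4219400 appears first as a reset con and then, two events later, as the object a new challenge is added on, so the connection object, or at least its address, was recycled for a new peer. A small sketch that pairs the two event types by pointer, using just the six events quoted here:

    from collections import defaultdict

    # (event, pointer) pairs transcribed from the six log lines above.
    events = [
        ("reset", "0x564fb4219800"), ("challenge", "0x564fb501a800"),
        ("reset", "0x564fb4219400"), ("challenge", "0x564fb662cc00"),
        ("reset", "0x564fb4063000"), ("challenge", "0x564fb4219400"),
    ]
    seen = defaultdict(list)
    for kind, ptr in events:
        seen[ptr].append(kind)
    for ptr, kinds in seen.items():
        if len(kinds) > 1:
            print(ptr, "->", kinds)   # 0x564fb4219400 -> ['reset', 'challenge']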
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:01.523745+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 39305216 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:02.523877+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 39305216 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:03.523932+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 39305216 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:04.524033+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 39297024 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:05.524148+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 39297024 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:06.524280+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 39297024 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:07.524377+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 98.783828735s of 98.789970398s, submitted: 11
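This utilization line is the only quantified work report in the stretch: over a window of about 98.8 s the BlueStore kv sync thread was idle for all but about 6 ms and committed just 11 transaction batches, so the commit path is roughly 0.006 percent busy, consistent with heartbeats being nearly the only write traffic. The arithmetic, straight from the numbers on the line:

    idle, window, submitted = 98.783828735, 98.789970398, 11
    busy = window - idle
    print(f"busy {busy * 1e3:.2f} ms ({busy / window:.4%}), {submitted / window:.3f} commits/s")
    # busy 6.14 ms (0.0062%), 0.111 commits/s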
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 39288832 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138685 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:08.524505+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 39280640 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:09.524593+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 39051264 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:10.525182+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 39043072 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:11.525297+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 39043072 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:12.525449+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 39043072 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:13.525558+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 39043072 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:14.525692+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 39043072 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:15.525790+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 39043072 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:16.525879+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 39034880 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:17.525919+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 39034880 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:18.526000+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 39034880 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:19.526133+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 39034880 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:20.526250+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 39034880 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:21.526355+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 39034880 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:22.526484+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 39034880 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:23.526622+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 39034880 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:24.526722+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 39026688 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:25.526835+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 39018496 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:26.526933+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 39018496 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:27.527026+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 39018496 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:28.527135+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 39018496 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:29.527261+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 39018496 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:30.527380+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 39018496 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:31.527522+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111116288 unmapped: 39018496 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:32.527646+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 ms_handle_reset con 0x564fb501a000 session 0x564fb3f95860
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: handle_auth_request added challenge on 0x564fb4063000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 39010304 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:33.527748+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 39010304 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:34.527857+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 39010304 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:35.527934+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 39010304 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:36.528036+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 39010304 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:37.528142+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 39010304 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:38.528245+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 39010304 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:39.528358+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 39010304 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:40.528503+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 39002112 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:41.528632+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 39002112 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:42.528752+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 39002112 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:43.528846+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 39002112 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:44.528965+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 39002112 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:45.529053+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 39002112 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:46.529197+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 38993920 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:47.529301+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 38993920 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:48.529403+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 38985728 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:49.529493+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 38985728 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:50.529623+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 38985728 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:51.529724+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 38985728 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:52.529886+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 38985728 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:53.530005+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 38985728 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:54.530165+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 38985728 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:55.530308+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 38985728 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:56.530414+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:57.530524+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:58.530632+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:03:59.530738+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:00.530844+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:01.530946+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:02.531095+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:03.531201+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:04.531325+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:05.531464+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:06.531584+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:07.531678+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:08.531773+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:09.531863+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:10.531927+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:11.532023+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:12.532155+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:13.532269+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:14.532373+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:15.532469+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: mgrc ms_handle_reset ms_handle_reset con 0x564fb3c44000
Nov 25 10:11:17 compute-0 ceph-osd[82261]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/92811439
Nov 25 10:11:17 compute-0 ceph-osd[82261]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/92811439,v1:192.168.122.100:6801/92811439]
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: get_auth_request con 0x564fb6634400 auth_method 0
Nov 25 10:11:17 compute-0 ceph-osd[82261]: mgrc handle_mgr_configure stats_period=5
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 38920192 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:16.532598+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 38920192 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:17.532699+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 38920192 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:18.532825+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 38920192 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:19.532958+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 38920192 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:20.533090+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:21.533307+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:22.533476+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:23.533666+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:24.533803+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 38977536 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:25.533995+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:26.534168+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:27.534328+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:28.534475+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:29.534656+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:30.534818+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:31.534952+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:32.535133+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:33.535250+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:34.535420+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:35.535535+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:36.535702+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:37.535819+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:38.535936+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:39.536058+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:40.536159+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:41.536246+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:42.536365+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:43.536458+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:44.536608+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:45.536725+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:46.536851+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:47.536974+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:48.537086+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:49.537202+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:50.537295+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:51.537428+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:52.537597+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:53.537726+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:54.537881+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:55.538067+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:56.538198+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:57.538298+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:58.538428+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:04:59.538569+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:00.538679+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:01.538786+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:02.538941+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:03.539075+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:04.539176+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:05.539312+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:06.539410+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:07.539531+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:08.539629+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:09.539783+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:10.539921+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:11.540052+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:12.540209+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:13.540314+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:14.540419+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:15.540513+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:16.541158+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:17.541266+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:18.541408+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:19.541542+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:20.541648+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:21.541740+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:17 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:17 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:22.541885+0000)
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:17 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:23.542022+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:24.542160+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:25.542278+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:26.542434+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:27.542610+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:28.542764+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:29.542882+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:30.543050+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:31.543146+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:32.543334+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:33.543451+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:34.543652+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:35.543754+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:36.543946+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:37.544060+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:38.544300+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:39.544440+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:40.544589+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:41.544698+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:42.544854+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:43.544960+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:44.545093+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:45.545201+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:46.545358+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:47.545506+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:48.545643+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:49.545789+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:50.545950+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:51.546115+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:52.546309+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:53.546404+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:54.546532+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:55.546675+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:56.546815+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:57.546940+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:58.547079+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:05:59.547184+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:00.547336+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:01.547502+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:02.547687+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:03.547849+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:04.548037+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:05.548192+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:06.548365+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:07.548516+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:08.548680+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:09.548836+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:10.549011+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:11.549186+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:12.549396+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:13.549552+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:14.549749+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:15.549997+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:16.550158+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:17.550276+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:18.550393+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:19.550504+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111042560 unmapped: 39092224 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:20.550616+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:21.550740+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:22.550921+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:23.551057+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:24.551209+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:25.551316+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:26.551447+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:27.551575+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:28.551705+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:29.551863+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:30.552022+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:31.552422+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:32.552665+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:33.552769+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:34.552949+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:35.553073+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:36.553205+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:37.553293+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:38.553466+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:39.553603+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 39084032 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:40.553751+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:41.553865+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:42.554045+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:43.554185+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:44.554346+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:45.554467+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:46.554600+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:47.554709+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:48.554846+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:49.554962+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:50.555087+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:51.555197+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 39075840 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:52.555376+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:53.555497+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:54.555674+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:55.555797+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:56.555961+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:57.556026+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:58.556122+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:06:59.556246+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:00.556345+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:01.556459+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:02.556632+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:03.556728+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:04.556848+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:05.556953+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:06.557248+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:07.557345+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 39067648 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:08.557474+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:09.557584+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:10.557711+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:11.557817+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:12.558009+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:13.558127+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:14.558278+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:15.558389+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:16.558523+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:17.558650+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:18.558822+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:19.558951+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:20.559097+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:21.559212+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:22.559413+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:23.559540+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 39059456 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:24.559714+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 39051264 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:25.559858+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 39051264 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:26.560043+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 39051264 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:27.560175+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111083520 unmapped: 39051264 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:28.560336+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:29.560458+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:30.560633+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:31.560765+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:32.560989+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:33.561158+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:34.561396+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:35.561586+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:36.561766+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:37.561934+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:38.562162+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:39.562331+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:40.562471+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 39247872 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:41.562624+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:42.562777+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:43.562879+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:44.563028+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:45.563143+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:46.563296+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:47.563403+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:48.563539+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:49.563711+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:50.563808+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:51.563934+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:52.564125+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:53.564278+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:54.564419+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:55.564592+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 39239680 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:56.564731+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:57.564927+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:58.565035+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:07:59.565162+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:00.565339+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:01.565508+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:02.565691+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:03.565823+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:04.565927+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:05.566060+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 39231488 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:06.566189+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:07.566338+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:08.566483+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:09.566648+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:10.566799+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:11.566945+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:12.567116+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:13.567257+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:14.567418+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:15.567558+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:16.567694+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:17.567824+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:18.567951+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:19.568072+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:20.568216+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 39223296 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:21.568351+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:22.568500+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:23.568606+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:24.568772+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:25.569031+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:26.569133+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:27.569305+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:28.569394+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:29.569510+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:30.569626+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:31.570712+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:32.570827+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:33.570928+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:34.571088+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:35.571254+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 39215104 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:36.571371+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 39206912 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:37.571509+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 39206912 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:38.571644+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 39206912 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:39.571752+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 39206912 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:40.571847+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 39206912 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:41.572008+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 39206912 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:42.572145+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 39206912 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:43.572231+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110927872 unmapped: 39206912 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:44.572387+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 39198720 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:45.572545+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 39198720 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:46.572709+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 39198720 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:47.572877+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 39198720 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:48.573022+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 39198720 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:49.573161+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 39198720 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:50.573304+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 39198720 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:51.573761+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 39198720 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:52.573941+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 39190528 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:53.574103+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 39190528 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:54.574258+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:55.574355+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:56.574479+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:57.574610+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:58.574735+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:08:59.574858+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:00.574989+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:01.575085+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:02.575217+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:03.575790+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:04.575961+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:05.576072+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:06.576183+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:07.576351+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 39182336 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:08.576491+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:09.576593+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:10.576696+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:11.576835+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:12.576995+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:13.577141+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:14.577258+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:15.577381+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:16.577510+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:17.577596+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:18.577697+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 39174144 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:19.577791+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110968832 unmapped: 39165952 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:20.577946+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:21.578067+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:22.578194+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:23.578355+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:24.578516+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:25.578657+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:26.578817+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:27.578947+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:28.579068+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:29.579226+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:30.579396+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:31.579568+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110977024 unmapped: 39157760 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:32.579706+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:33.579814+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:34.579944+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:35.580255+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:36.580390+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:37.580532+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:38.580832+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:39.580946+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:40.581116+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:41.581253+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:42.581407+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:43.581505+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:44.581608+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 39149568 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:45.581729+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 110993408 unmapped: 39141376 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:46.581884+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:47.582021+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:48.582157+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:49.582301+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:50.582445+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:51.582557+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:52.582744+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:53.582927+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:54.583023+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:55.583155+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 39133184 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:56.583296+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:57.583460+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:58.583623+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:09:59.583785+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:00.583955+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:01.584083+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:02.584238+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:03.584342+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:04.584458+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:05.584605+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:06.585159+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:07.585263+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 39124992 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:08.585400+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:09.585540+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:10.585654+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:11.585743+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 39116800 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:12.585855+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:13.585986+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:14.586116+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:15.586248+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:16.586383+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:17.586475+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:18.586618+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:19.586736+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:20.586875+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 39108608 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:21.586940+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:22.587136+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:23.587294+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:24.587465+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:25.587631+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:26.587739+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:27.587929+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:28.588084+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:29.588158+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:30.588313+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:31.588438+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:32.588565+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:33.588680+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:34.588787+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:35.588908+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:36.589011+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:37.589066+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:38.589168+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:39.589270+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:40.589369+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:41.589418+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:42.589528+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa495000/0x0/0x4ffc00000, data 0x9081b2/0x9b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:43.589622+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 10:11:18 compute-0 ceph-osd[82261]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138598 data_alloc: 218103808 data_used: 167936
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:44.589721+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 39100416 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:45.589818+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: do_command 'config diff' '{prefix=config diff}'
Nov 25 10:11:18 compute-0 ceph-osd[82261]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 10:11:18 compute-0 ceph-osd[82261]: do_command 'config show' '{prefix=config show}'
Nov 25 10:11:18 compute-0 ceph-osd[82261]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 10:11:18 compute-0 ceph-osd[82261]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 10:11:18 compute-0 ceph-osd[82261]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 10:11:18 compute-0 ceph-osd[82261]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 10:11:18 compute-0 ceph-osd[82261]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111140864 unmapped: 38993920 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:46.589922+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 38682624 heap: 150134784 old mem: 2845415832 new mem: 2845415832
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: tick
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_tickets
Nov 25 10:11:18 compute-0 ceph-osd[82261]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T10:10:47.590014+0000)
Nov 25 10:11:18 compute-0 ceph-osd[82261]: do_command 'log dump' '{prefix=log dump}'
Nov 25 10:11:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 25 10:11:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941136760' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 25 10:11:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162767992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 25 10:11:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3748325728' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2257282358' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.19506 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3066847444' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3328015327' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1334765920' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/915986928' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2740963012' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2865881099' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3855566086' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3776552969' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4252505220' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1708222253' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1674075659' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2311045081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3984037843' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/941136760' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4162767992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3748325728' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:11:18 compute-0 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:11:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 25 10:11:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3558542597' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29369 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 25 10:11:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2067234885' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29387 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 crontab[291135]: (root) LIST (root)
Nov 25 10:11:18 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 25 10:11:18 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3057625564' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:11:18 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:18.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:18.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:18.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:18 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:18.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:18 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29405 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19611 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:19.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29242 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:19 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:19 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:19 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:19.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 25 10:11:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931407246' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 10:11:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147863761' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29447 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/627438796' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2856483859' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/898889878' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3558542597' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.29369 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2067234885' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.29387 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3364237' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/891235372' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3057625564' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.29405 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4030864340' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.19611 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2073664472' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3931407246' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/147863761' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29266 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29272 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 25 10:11:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208949389' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 25 10:11:19 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921274166' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29296 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19677 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:19 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29501 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:19 compute-0 nova_compute[253512]: 2025-11-25 10:11:19.975 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29320 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19704 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-0-zcfgby[74472]: ::ffff:192.168.122.100 - - [25/Nov/2025:10:11:20] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: [prometheus INFO cherrypy.access.140045298020160] ::ffff:192.168.122.100 - - [25/Nov/2025:10:11:20] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29519 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19698 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.29242 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.29447 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.29266 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.29272 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1208949389' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/106983101' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/921274166' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.29474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.29296 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.19677 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2504291124' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.29501 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1094097717' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1644087196' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/526553241' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29350 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19719 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 systemd[1]: Started Hostname Service.
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29540 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29374 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:20 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19740 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:20 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 10:11:20 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2413847078' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 25 10:11:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887929255' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:11:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:21.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29395 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:21 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:21 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:21.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19779 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.29320 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.19704 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.29519 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.19698 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.29350 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.19719 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3552601814' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.29540 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2191149956' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.29374 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:21 compute-0 ceph-mon[74207]: pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.19740 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2413847078' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1887929255' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:21 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3774492437' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 nova_compute[253512]: 2025-11-25 10:11:21.369 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29428 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 25 10:11:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483204903' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19809 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29606 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29458 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:21 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 10:11:21 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2533429017' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19863 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 25 10:11:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083160143' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29651 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.29395 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.19779 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4115573552' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.29428 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3483204903' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.19809 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.29606 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.29458 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2533429017' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/4227443116' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1685573185' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4083160143' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 25 10:11:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/101408041' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:11:22 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:22 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 25 10:11:22 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895785370' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:11:23 compute-0 podman[291910]: 2025-11-25 10:11:23.030913547 +0000 UTC m=+0.091729944 container health_status bb2ad46b031dbc871c01c5a034271a984f8ab4a2b0cc2c21fef4f669a0b94e39 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:11:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 10:11:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:23.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 10:11:23 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:23 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:23 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:23.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 10:11:23 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.19932 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='client.19863 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='client.29651 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1268220495' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/101408041' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1199518046' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3895785370' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2843991185' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/300601334' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29711 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:23 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 25 10:11:23 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223628237' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 25 10:11:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1988260548' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29593 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29756 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: from='client.19932 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: from='client.29711 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1860618778' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4223628237' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/4266040070' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2335435487' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1988260548' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1551765300' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/3553712323' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 10:11:24 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 25 10:11:24 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4053656256' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:11:24 compute-0 sudo[292171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 10:11:24 compute-0 sudo[292171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 10:11:24 compute-0 sudo[292171]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:24 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:24 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29780 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:24 compute-0 nova_compute[253512]: 2025-11-25 10:11:24.976 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:25 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.20013 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:25.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:25 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29641 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:25 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:25 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:25 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:25.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:25 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29789 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 25 10:11:25 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476682068' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.29593 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.29756 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/2790692213' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/3668990915' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/4053656256' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/286976784' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.29780 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.20013 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/3869111386' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1476682068' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29656 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 25 10:11:25 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255381670' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:11:25 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29674 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.20055 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29831 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 25 10:11:26 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145562190' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:11:26 compute-0 nova_compute[253512]: 2025-11-25 10:11:26.371 253516 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.29641 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.29789 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.29656 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1824040594' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/1255381670' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.29674 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/1956155468' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.20055 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/2145562190' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1766471432' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29843 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.20082 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: log_channel(cluster) log [DBG] : pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:26 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.20097 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:27.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29722 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:27 compute-0 radosgw[89491]: ====== starting new request req=0x7ff15b1b05d0 =====
Nov 25 10:11:27 compute-0 radosgw[89491]: ====== req done req=0x7ff15b1b05d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 10:11:27 compute-0 radosgw[89491]: beast: 0x7ff15b1b05d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:27.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 10:11:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:27.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:27.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534696.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534696.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:27.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534695.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534695.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:27 compute-0 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-alertmanager-compute-0[105164]: ts=2025-11-25T10:11:27.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005534694.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005534694.shiftstack on 192.168.122.80:53: no such host"
Nov 25 10:11:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Nov 25 10:11:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282644117' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29737 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 10:11:27 compute-0 ceph-mon[74207]: from='client.29831 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mon[74207]: from='client.29843 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mon[74207]: from='client.20082 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.102:0/1296083098' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mon[74207]: pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 25 10:11:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2747244808' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mon[74207]: from='client.20097 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.101:0/2413173181' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mon[74207]: from='client.? 192.168.122.100:0/282644117' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29888 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mon[74207]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 25 10:11:27 compute-0 ceph-mon[74207]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1173905002' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29894 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:11:27 compute-0 ceph-mgr[74476]: log_channel(audit) log [DBG] : from='client.29900 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch